Merge remote-tracking branch into HEAD

* keystone/mirror-android12-5.10-2023-01: (51 commits)
  ANDROID: cpu: correct dl_cpu_busy() calls
  BACKPORT: ext4: fix use-after-free in ext4_rename_dir_prepare
  ANDROID: GKI: rockchip: Update symbols
  BACKPORT: f2fs: let's avoid panic if extent_tree is not created
  BACKPORT: f2fs: should use a temp extent_info for lookup
  BACKPORT: f2fs: don't mix to use union values in extent_info
  BACKPORT: f2fs: initialize extent_cache parameter
  BACKPORT: f2fs: add block_age-based extent cache
  BACKPORT: f2fs: allocate the extent_cache by default
  BACKPORT: f2fs: refactor extent_cache to support for read and more
  BACKPORT: f2fs: remove unnecessary __init_extent_tree
  BACKPORT: f2fs: move internal functions into extent_cache.c
  BACKPORT: f2fs: specify extent cache for read explicitly
  BACKPORT: f2fs: add "c_len" into trace_f2fs_update_extent_tree_range for compressed file
  BACKPORT: f2fs: fix race condition on setting FI_NO_EXTENT flag
  BACKPORT: f2fs: extent cache: support unaligned extent
  UPSTREAM: io_uring: kill goto error handling in io_sqpoll_wait_sq()
  ANDROID: allmodconfig: disable WERROR
  UPSTREAM: Enable '-Werror' by default for all kernel builds
  ANDROID: GKI: VIVO: Add a symbol to symbol list
  ...

Change-Id: I23f6bc7da718938f9d6630ad56421df28ee268a6
deyaoren@google.com 2023-02-02 22:10:02 +00:00
commit 6c665a5f36
51 changed files with 2743 additions and 681 deletions


@ -514,3 +514,17 @@ Date: July 2021
Contact: "Daeho Jeong" <daehojeong@google.com>
Description: Controls for which GC mode the "gc_reclaimed_segments" node reports
its count. Refer to the description of the modes in "gc_reclaimed_segments".
What: /sys/fs/f2fs/<disk>/hot_data_age_threshold
Date: November 2022
Contact: "Ping Xiong" <xiongping1@xiaomi.com>
Description: When DATA SEPARATION is on, it controls the age threshold used to
mark data blocks as hot. By default it is initialized to 262144 blocks
(equal to 1 GB).
What: /sys/fs/f2fs/<disk>/warm_data_age_threshold
Date: November 2022
Contact: "Ping Xiong" <xiongping1@xiaomi.com>
Description: When DATA SEPARATION is on, it controls the age threshold used to
mark data blocks as warm. By default it is initialized to 2621440 blocks
(equal to 10 GB).
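For context, the thresholds above are plain block counts: with 4 KiB blocks, the default 262144 blocks equal 1 GiB and 2621440 blocks equal 10 GiB. A minimal userspace sketch for tuning the hot threshold is shown below; the disk name ("sda1") and the doubled value are hypothetical and not part of this patch.

	#include <stdio.h>

	int main(void)
	{
		/* Default is 262144 blocks; 262144 * 4 KiB = 1 GiB.
		 * Writing 524288 doubles the hot-data age threshold to 2 GiB. */
		FILE *f = fopen("/sys/fs/f2fs/sda1/hot_data_age_threshold", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fprintf(f, "%u\n", 524288u);
		return fclose(f) ? 1 : 0;
	}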


@ -300,6 +300,10 @@ inlinecrypt When possible, encrypt/decrypt the contents of encrypted
Documentation/block/inline-encryption.rst.
atgc Enable age-threshold garbage collection; it provides high
effectiveness and efficiency for background GC.
age_extent_cache Enable an age extent cache based on an rb-tree. It records
the data block update frequency of extents per inode, in
order to provide better temperature hints for data block
allocation.
======================== ============================================================
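As a usage illustration (not part of this patch), age_extent_cache is passed like any other f2fs mount option. A minimal mount(2) sketch follows; the device and mount point are hypothetical.

	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		/* Hypothetical device and target; "age_extent_cache" is the
		 * option documented in the table above. */
		if (mount("/dev/sda1", "/mnt", "f2fs", 0, "age_extent_cache") != 0) {
			perror("mount");
			return 1;
		}
		return 0;
	}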
Debugfs Entries


@ -0,0 +1,72 @@
-*- org -*-
It is important to provide a consistent interface to userland. LED
devices have one problem there: the naming of directories in
/sys/class/leds. It would be nice if userland simply knew the right
"name" for a given LED function, but the situation has become more
complex.
If backwards compatibility is not an issue, new code should use one of
the "good" names from this list, and you should extend the list where
applicable.
Legacy names are listed, too; if you are writing an application that
wants to use a particular feature, probe for the good name first, and
then fall back to the legacy ones.
Note that there is a list of functions in include/dt-bindings/leds/common.h.
* Gamepads and joysticks
Game controllers may feature LEDs to indicate a player number. This is commonly
used on game consoles where multiple controllers can be connected to a system.
The "player LEDs" are then programmed with a pattern to indicate a particular
player. For example, a game controller with 4 LEDs may be programmed with "x---"
to indicate player 1, "-x--" to indicate player 2, and so on, where "x" means on.
Input drivers can utilize the LED class to expose the individual player LEDs
of a game controller using the function "player".
Note: tracking and management of Player IDs is the responsibility of user space,
though drivers may pick a default value.
Good: "input*:*:player-{1,2,3,4,5}
* Keyboards
Good: "input*:*:capslock"
Good: "input*:*:scrolllock"
Good: "input*:*:numlock"
Legacy: "shift-key-light" (Motorola Droid 4, capslock)
Set of common keyboard LEDs, going back to PC AT or so.
Legacy: "tpacpi::thinklight" (IBM/Lenovo Thinkpads)
Legacy: "lp5523:kb{1,2,3,4,5,6}" (Nokia N900)
Frontlight/backlight of main keyboard.
Legacy: "button-backlight" (Motorola Droid 4)
Some phones have touch buttons below the screen; these are separate from the
main keyboard, and this is their backlight.
* Sound subsystem
Good: "platform:*:mute"
Good: "platform:*:micmute"
LEDs on notebook body, indicating that sound input / output is muted.
* System notification
Legacy: "status-led:{red,green,blue}" (Motorola Droid 4)
Legacy: "lp5523:{r,g,b}" (Nokia N900)
Phones usually have a multi-color status LED.
* Power management
Good: "platform:*:charging" (allwinner sun50i)
* Screen
Good: ":backlight" (Motorola Droid 4)


@ -793,6 +793,9 @@ stackp-flags-$(CONFIG_STACKPROTECTOR_STRONG) := -fstack-protector-strong
KBUILD_CFLAGS += $(stackp-flags-y)
KBUILD_CFLAGS-$(CONFIG_WERROR) += -Werror
KBUILD_CFLAGS += $(KBUILD_CFLAGS-y)
ifdef CONFIG_CC_IS_CLANG
KBUILD_CPPFLAGS += -Qunused-arguments
KBUILD_CFLAGS += -Wno-format-invalid-specifier

File diff suppressed because it is too large.


@ -2297,6 +2297,7 @@
rtc_update_irq
rtc_valid_tm
rtnl_is_locked
__rtnl_link_register
__rtnl_link_unregister
rtnl_lock
rtnl_unlock


@ -22,6 +22,8 @@
bdget_disk
bdput
_bin2bcd
__bitmap_and
__bitmap_andnot
blk_cleanup_queue
blk_execute_rq_nowait
blk_mq_free_request
@ -80,10 +82,13 @@
__cfi_slowpath
__check_object_size
__class_create
class_create_file_ns
class_destroy
class_for_each_device
__class_register
class_remove_file_ns
class_unregister
__ClearPageMovable
clk_bulk_disable
clk_bulk_enable
clk_bulk_prepare
@ -98,6 +103,7 @@
clk_hw_get_flags
clk_hw_get_name
clk_hw_get_parent
clk_hw_get_parent_by_index
clk_hw_get_rate
__clk_mux_determine_rate
clk_notifier_register
@ -119,6 +125,7 @@
__const_udelay
consume_skb
cpu_bit_bitmap
cpufreq_cpu_get
__cpufreq_driver_target
cpufreq_generic_suspend
cpufreq_register_governor
@ -167,18 +174,19 @@
crypto_unregister_shash
crypto_unregister_template
__crypto_xor
_ctype
debugfs_attr_read
debugfs_attr_write
debugfs_create_dir
debugfs_create_file
debugfs_create_regset32
debugfs_remove
debugfs_rename
default_llseek
delayed_work_timer_fn
del_gendisk
del_timer
del_timer_sync
desc_to_gpio
destroy_workqueue
dev_close
dev_driver_string
@ -187,6 +195,7 @@
devfreq_add_governor
devfreq_recommended_opp
devfreq_register_opp_notifier
devfreq_remove_governor
devfreq_resume_device
devfreq_suspend_device
devfreq_unregister_opp_notifier
@ -199,6 +208,7 @@
device_del
device_destroy
device_get_child_node_count
device_get_match_data
device_get_named_child_node
device_get_next_child_node
device_initialize
@ -214,6 +224,7 @@
device_remove_file
device_set_wakeup_capable
device_set_wakeup_enable
device_unregister
device_wakeup_enable
_dev_info
__dev_kfree_skb_any
@ -227,6 +238,7 @@
devm_devfreq_add_device
devm_devfreq_event_add_edev
devm_devfreq_register_opp_notifier
devm_device_add_group
devm_extcon_dev_allocate
devm_extcon_dev_register
devm_free_irq
@ -284,6 +296,7 @@
devm_snd_soc_register_component
devm_usb_get_phy
_dev_notice
dev_open
dev_pm_domain_detach
dev_pm_opp_find_freq_ceil
dev_pm_opp_find_freq_floor
@ -298,6 +311,7 @@
dev_pm_opp_register_set_opp_helper
dev_pm_opp_set_rate
dev_pm_opp_set_regulators
dev_pm_opp_set_supported_hw
dev_pm_opp_unregister_set_opp_helper
dev_printk
devres_add
@ -360,8 +374,8 @@
driver_register
driver_unregister
drm_add_edid_modes
drm_add_modes_noedid
drm_atomic_get_crtc_state
drm_atomic_get_new_bridge_state
drm_atomic_get_new_connector_for_encoder
drm_atomic_helper_bridge_destroy_state
drm_atomic_helper_bridge_duplicate_state
@ -379,8 +393,12 @@
drm_compat_ioctl
drm_connector_attach_encoder
drm_connector_cleanup
drm_connector_has_possible_encoder
drm_connector_init
drm_connector_init_with_ddc
drm_connector_list_iter_begin
drm_connector_list_iter_end
drm_connector_list_iter_next
drm_connector_unregister
drm_connector_update_edid_property
__drm_dbg
@ -398,7 +416,9 @@
drm_dp_aux_register
drm_dp_aux_unregister
drm_dp_bw_code_to_link_rate
drm_dp_channel_eq_ok
drm_dp_dpcd_read
drm_dp_dpcd_read_link_status
drm_dp_dpcd_write
drm_dp_get_phy_test_pattern
drm_dp_link_rate_to_bw_code
@ -468,6 +488,8 @@
enable_irq
eth_mac_addr
eth_platform_get_mac_address
ethtool_op_get_link
ethtool_op_get_ts_info
eth_type_trans
eth_validate_addr
event_triggers_call
@ -479,6 +501,7 @@
extcon_set_state_sync
extcon_unregister_notifier
failure_tracking
fasync_helper
fd_install
find_next_bit
find_next_zero_bit
@ -539,6 +562,7 @@
gpiod_get_value_cansleep
gpiod_set_consumer_name
gpiod_set_raw_value
gpiod_set_raw_value_cansleep
gpiod_set_value
gpiod_set_value_cansleep
gpiod_to_irq
@ -568,6 +592,7 @@
hrtimer_start_range_ns
i2c_adapter_type
i2c_add_adapter
i2c_add_numbered_adapter
i2c_del_adapter
i2c_del_driver
i2c_get_adapter
@ -583,6 +608,9 @@
i2c_smbus_xfer
i2c_transfer
i2c_transfer_buffer_flags
ida_alloc_range
ida_destroy
ida_free
idr_alloc
idr_destroy
idr_find
@ -637,7 +665,6 @@
irq_find_mapping
irq_get_irq_data
irq_modify_status
irq_of_parse_and_map
irq_set_affinity_hint
irq_set_chained_handler_and_data
irq_set_chip
@ -645,6 +672,7 @@
irq_set_chip_data
irq_set_irq_type
irq_set_irq_wake
irq_to_desc
is_vmalloc_addr
jiffies
jiffies_to_msecs
@ -660,6 +688,7 @@
kfree_const
kfree_sensitive
kfree_skb
kill_fasync
kimage_voffset
__kmalloc
kmalloc_caches
@ -681,6 +710,10 @@
kstrtouint_from_user
kstrtoull
kthread_create_on_node
kthread_create_worker
kthread_destroy_worker
kthread_flush_worker
kthread_queue_work
kthread_should_stop
kthread_stop
ktime_get
@ -698,11 +731,15 @@
__list_add_valid
__list_del_entry_valid
__local_bh_enable_ip
__lock_page
__log_post_read_mmio
__log_read_mmio
__log_write_mmio
lzo1x_decompress_safe
mdiobus_alloc_size
mdiobus_free
mdiobus_read
mdiobus_unregister
mdiobus_write
media_create_pad_link
media_device_init
@ -717,7 +754,6 @@
media_pipeline_start
media_pipeline_stop
memcpy
__memcpy_fromio
memdup_user
memmove
memset
@ -736,7 +772,9 @@
mipi_dsi_host_unregister
misc_deregister
misc_register
mmc_cqe_request_done
mmc_of_parse
mmc_request_done
__mmdrop
mod_delayed_work_on
mod_timer
@ -752,10 +790,14 @@
mutex_lock_interruptible
mutex_trylock
mutex_unlock
napi_gro_receive
__netdev_alloc_skb
netdev_err
netdev_info
netdev_update_features
netdev_warn
netif_carrier_off
netif_carrier_on
netif_rx
netif_rx_ni
netif_tx_wake_queue
@ -788,6 +830,7 @@
of_device_is_available
of_device_is_compatible
of_drm_find_bridge
of_drm_find_panel
of_find_compatible_node
of_find_device_by_node
of_find_i2c_device_by_node
@ -806,6 +849,7 @@
of_get_parent
of_get_property
of_get_regulator_init_data
of_graph_get_endpoint_by_regs
of_graph_get_next_endpoint
of_graph_get_remote_node
of_graph_get_remote_port_parent
@ -849,6 +893,7 @@
perf_trace_buf_alloc
perf_trace_run_bpf_submit
pfn_valid
phy_attached_info
phy_configure
phy_drivers_register
phy_drivers_unregister
@ -887,6 +932,7 @@
platform_driver_unregister
platform_get_irq
platform_get_irq_byname
platform_get_irq_optional
platform_get_resource
platform_get_resource_byname
platform_irq_count
@ -913,6 +959,7 @@
power_supply_changed
power_supply_class
power_supply_get_battery_info
power_supply_get_by_name
power_supply_get_by_phandle
power_supply_get_drvdata
power_supply_get_property
@ -936,6 +983,7 @@
put_disk
__put_page
__put_task_struct
put_unused_fd
pwm_adjust_config
pwm_apply_state
queue_delayed_work_on
@ -946,6 +994,7 @@
_raw_spin_lock_bh
_raw_spin_lock_irq
_raw_spin_lock_irqsave
_raw_spin_trylock
_raw_spin_unlock
_raw_spin_unlock_bh
_raw_spin_unlock_irq
@ -964,6 +1013,7 @@
__register_chrdev
register_chrdev_region
register_inetaddr_notifier
register_netdev
register_netdevice
register_netdevice_notifier
register_pm_notifier
@ -1037,6 +1087,7 @@
scsi_ioctl_block_when_processing_errors
sdev_prefix_printk
sdhci_add_host
sdhci_execute_tuning
sdhci_get_property
sdhci_pltfm_clk_get_max_clock
sdhci_pltfm_free
@ -1052,6 +1103,7 @@
seq_puts
seq_read
set_page_dirty_lock
__SetPageMovable
sg_alloc_table
sg_alloc_table_from_pages
sg_free_table
@ -1068,6 +1120,7 @@
simple_strtoul
single_open
single_release
skb_add_rx_frag
skb_clone
skb_copy
skb_copy_bits
@ -1084,6 +1137,7 @@
skcipher_walk_virt
snd_pcm_format_width
snd_soc_add_component_controls
snd_soc_add_dai_controls
snd_soc_card_jack_new
snd_soc_component_read
snd_soc_component_set_jack
@ -1116,6 +1170,7 @@
snd_soc_pm_ops
snd_soc_put_enum_double
snd_soc_put_volsw
snd_soc_register_component
snd_soc_unregister_component
snprintf
sort
@ -1125,6 +1180,7 @@
sscanf
__stack_chk_fail
__stack_chk_guard
strcasecmp
strchr
strcmp
strcpy
@ -1146,6 +1202,7 @@
sync_file_create
sync_file_get_fence
synchronize_irq
synchronize_net
synchronize_rcu
syscon_node_to_regmap
syscon_regmap_lookup_by_phandle
@ -1158,6 +1215,7 @@
sysfs_remove_link
sysfs_streq
system_freezable_wq
system_highpri_wq
system_long_wq
system_power_efficient_wq
system_state
@ -1190,9 +1248,11 @@
typec_switch_register
typec_switch_unregister
__udelay
unlock_page
__unregister_chrdev
unregister_chrdev_region
unregister_inetaddr_notifier
unregister_netdev
unregister_netdevice_notifier
unregister_netdevice_queue
unregister_reboot_notifier
@ -1388,7 +1448,6 @@
# required by bcmdhd.ko
alloc_etherdev_mqs
complete_and_exit
dev_open
down_interruptible
down_timeout
iwe_stream_add_event
@ -1399,9 +1458,6 @@
mmc_set_data_timeout
mmc_sw_reset
mmc_wait_for_req
netdev_update_features
netif_napi_add
__netif_napi_del
netif_set_xps_queue
__netlink_kernel_create
netlink_kernel_release
@ -1410,7 +1466,6 @@
__nlmsg_put
_raw_read_lock_bh
_raw_read_unlock_bh
register_netdev
sched_set_fifo_low
sdio_claim_host
sdio_disable_func
@ -1444,16 +1499,15 @@
strcat
strspn
sys_tz
unregister_netdev
unregister_pm_notifier
wireless_send_event
# required by bifrost_kbase.ko
__arch_clear_user
__bitmap_andnot
__bitmap_equal
__bitmap_or
__bitmap_weight
__bitmap_xor
cache_line_size
clear_page
complete_all
@ -1503,7 +1557,6 @@
simple_open
strcspn
system_freezing_cnt
system_highpri_wq
_totalram_pages
__traceiter_gpu_mem_total
trace_output_call
@ -1515,9 +1568,6 @@
vmalloc_user
vmf_insert_pfn_prot
# required by bq25700_charger.ko
power_supply_get_by_name
# required by cdc-wdm.ko
cdc_parse_cdc_header
@ -1535,8 +1585,6 @@
# required by cfg80211.ko
bpf_trace_run10
_ctype
debugfs_rename
dev_change_net_namespace
__dev_get_by_index
dev_get_by_index
@ -1569,7 +1617,6 @@
rfkill_blocked
rfkill_pause_polling
rfkill_resume_polling
skb_add_rx_frag
__sock_create
sock_release
unregister_pernet_device
@ -1596,7 +1643,6 @@
# required by clk-rockchip-regmap.ko
clk_hw_get_num_parents
clk_hw_get_parent_by_index
divider_recalc_rate
divider_round_rate_parent
@ -1608,6 +1654,7 @@
__clk_get_hw
clk_hw_register_composite
clk_hw_round_rate
clk_hw_set_parent
clk_mux_ops
clk_mux_ro_ops
clk_register_divider_table
@ -1627,11 +1674,6 @@
# required by cm3218.ko
i2c_smbus_write_word_data
# required by cma_heap.ko
cma_get_name
dma_heap_get_drvdata
dma_heap_put
# required by cpufreq-dt.ko
cpufreq_enable_boost_support
cpufreq_freq_attr_scaling_available_freqs
@ -1666,7 +1708,6 @@
# required by cqhci.ko
devm_blk_ksm_init
mmc_cqe_request_done
# required by cryptodev.ko
crypto_ahash_final
@ -1677,7 +1718,11 @@
sg_last
unregister_sysctl_table
# required by cw221x_battery.ko
power_supply_is_system_supplied
# required by display-connector.ko
drm_atomic_get_new_bridge_state
drm_probe_ddc
# required by dm9601.ko
@ -1695,7 +1740,6 @@
# required by dw-hdmi.ko
drm_connector_attach_max_bpc_property
drm_default_rgb_quant_range
of_graph_get_endpoint_by_regs
# required by dw-mipi-dsi.ko
drm_panel_bridge_add_typed
@ -1721,14 +1765,12 @@
mmc_regulator_set_ocr
mmc_regulator_set_vqmmc
mmc_remove_host
mmc_request_done
sdio_signal_irq
sg_miter_next
sg_miter_start
sg_miter_stop
# required by dw_wdt.ko
platform_get_irq_optional
watchdog_init_timeout
watchdog_register_device
watchdog_set_restart_priority
@ -1739,7 +1781,6 @@
bitmap_find_next_zero_area_off
__bitmap_set
phy_reset
_raw_spin_trylock
usb_add_gadget_udc
usb_del_gadget_udc
usb_ep_set_maxpacket_limit
@ -1758,9 +1799,15 @@
usb_speed_string
usb_wakeup_enabled_descendants
# required by dwmac-rockchip.ko
csum_tcpudp_nofold
ip_send_check
of_get_phy_mode
# required by fusb302.ko
extcon_get_extcon_dev
fwnode_create_software_node
sched_set_fifo
tcpm_cc_change
tcpm_pd_hard_reset
tcpm_pd_receive
@ -1812,6 +1859,7 @@
i2c_verify_client
# required by i2c-gpio.ko
desc_to_gpio
i2c_bit_add_numbered_bus
# required by i2c-hid.ko
@ -1822,7 +1870,6 @@
hid_parse_report
# required by i2c-mux.ko
i2c_add_numbered_adapter
__i2c_transfer
rt_mutex_lock
rt_mutex_trylock
@ -1880,7 +1927,6 @@
dev_fetch_sw_netstats
dev_queue_xmit
ether_setup
ethtool_op_get_link
get_random_u32
__hw_addr_init
__hw_addr_sync
@ -1889,10 +1935,7 @@
kernel_param_unlock
kfree_skb_list
ktime_get_seconds
napi_gro_receive
netdev_set_default_ethtool_ops
netif_carrier_off
netif_carrier_on
netif_receive_skb
netif_receive_skb_list
netif_tx_stop_all_queues
@ -1918,7 +1961,6 @@
skb_queue_head
skb_queue_purge
skb_queue_tail
synchronize_net
unregister_inet6addr_notifier
unregister_netdevice_many
@ -1955,9 +1997,6 @@
dev_pm_qos_expose_latency_tolerance
dev_pm_qos_hide_latency_tolerance
dev_pm_qos_update_user_latency_tolerance
ida_alloc_range
ida_destroy
ida_free
init_srcu_struct
memchr_inv
param_ops_ulong
@ -2049,7 +2088,6 @@
__arm_smccc_hvc
bus_for_each_dev
device_register
device_unregister
free_pages_exact
memremap
memunmap
@ -2076,6 +2114,7 @@
# required by pcie-dw-rockchip.ko
cpumask_next_and
debugfs_create_devm_seqfile
dw_pcie_find_ext_capability
dw_pcie_host_init
dw_pcie_link_up
@ -2106,7 +2145,6 @@
extcon_sync
# required by phy-rockchip-inno-usb3.ko
strcasecmp
usb_add_phy
# required by phy-rockchip-samsung-hdptx-hdmi.ko
@ -2153,10 +2191,25 @@
clk_bulk_put
of_genpd_add_provider_onecell
panic
param_get_bool
param_set_bool
pm_clk_add_clk
pm_genpd_add_subdomain
pm_genpd_init
pm_genpd_remove
pm_wq
# required by pps_core.ko
kobject_get
# required by ptp.ko
kthread_cancel_delayed_work_sync
kthread_delayed_work_timer_fn
kthread_mod_delayed_work
kthread_queue_delayed_work
ktime_get_snapshot
posix_clock_register
posix_clock_unregister
# required by pwm-regulator.ko
regulator_map_voltage_iterate
@ -2173,6 +2226,13 @@
pwm_free
pwm_request
# required by pwrseq_simple.ko
bitmap_alloc
devm_gpiod_get_array
gpiod_set_array_value_cansleep
mmc_pwrseq_register
mmc_pwrseq_unregister
# required by reboot-mode.ko
devres_release
kernel_kobj
@ -2186,6 +2246,7 @@
alloc_iova_fast
dma_fence_wait_timeout
free_iova_fast
idr_alloc_cyclic
kstrdup_quotable_cmdline
mmput
@ -2194,9 +2255,6 @@
irq_domain_xlate_onetwocell
irq_set_parent
# required by rk628_dsi.ko
of_drm_find_panel
# required by rk805-pwrkey.ko
devm_request_any_context_irq
@ -2221,6 +2279,10 @@
# required by rk860x-regulator.ko
regulator_suspend_enable
# required by rk_cma_heap.ko
dma_heap_get_drvdata
dma_heap_put
# required by rk_crypto.ko
crypto_ahash_digest
crypto_dequeue_request
@ -2249,8 +2311,15 @@
# required by rk_ircut.ko
drain_workqueue
# required by rk_system_heap.ko
deferred_free
dmabuf_page_pool_alloc
dmabuf_page_pool_create
dmabuf_page_pool_destroy
dmabuf_page_pool_free
swiotlb_max_segment
# required by rk_vcodec.ko
devfreq_remove_governor
devm_iounmap
dev_pm_domain_attach
dev_pm_opp_get_freq
@ -2260,9 +2329,7 @@
__fdget
iommu_device_unregister
iommu_dma_reserve_iova
kthread_flush_worker
__kthread_init_worker
kthread_queue_work
kthread_worker_fn
of_device_alloc
of_dma_configure_id
@ -2287,7 +2354,18 @@
# required by rockchip-cpufreq.ko
cpufreq_unregister_notifier
dev_pm_opp_put_prop_name
dev_pm_opp_set_supported_hw
# required by rockchip-hdmirx.ko
cec_s_phys_addr_from_edid
cpu_latency_qos_remove_request
device_create_with_groups
of_reserved_mem_device_release
v4l2_ctrl_log_status
v4l2_ctrl_subscribe_event
v4l2_find_dv_timings_cap
v4l2_src_change_event_subscribe
vb2_dma_contig_memops
vb2_fop_read
# required by rockchip-rng.ko
devm_hwrng_register
@ -2296,8 +2374,10 @@
# required by rockchip_bus.ko
cpu_topology
# required by rockchip_debug.ko
nr_irqs
# required by rockchip_dmc.ko
cpufreq_cpu_get
cpufreq_cpu_put
cpufreq_quick_get
devfreq_event_disable_edev
@ -2330,7 +2410,6 @@
regulator_get_linear_step
# required by rockchip_pwm_remotectl.ko
irq_to_desc
__tasklet_hi_schedule
# required by rockchip_saradc.ko
@ -2368,6 +2447,7 @@
drm_atomic_commit
drm_atomic_get_connector_state
drm_atomic_get_plane_state
drm_atomic_helper_bridge_propagate_bus_fmt
drm_atomic_helper_check
drm_atomic_helper_check_plane_state
drm_atomic_helper_cleanup_planes
@ -2398,8 +2478,9 @@
drm_atomic_set_mode_for_crtc
drm_atomic_state_alloc
__drm_atomic_state_free
drm_bridge_chain_mode_set
drm_bridge_get_edid
drm_connector_has_possible_encoder
drm_connector_attach_content_protection_property
drm_connector_list_iter_begin
drm_connector_list_iter_end
drm_connector_list_iter_next
@ -2416,9 +2497,7 @@
drm_crtc_vblank_put
drm_debugfs_create_files
drm_do_get_edid
drm_dp_channel_eq_ok
drm_dp_clock_recovery_ok
drm_dp_dpcd_read_link_status
drm_dp_get_adjust_request_pre_emphasis
drm_dp_get_adjust_request_voltage
drm_dp_read_desc
@ -2450,6 +2529,7 @@
drm_gem_unmap_dma_buf
drm_get_format_info
drm_get_format_name
drm_hdcp_update_content_protection
drm_helper_mode_fill_fb_struct
drm_kms_helper_poll_enable
drm_kms_helper_poll_fini
@ -2475,6 +2555,7 @@
drm_mode_prune_invalid
drm_mode_set_crtcinfo
drm_modeset_lock_all
drm_modeset_unlock
drm_modeset_unlock_all
drm_mode_sort
drm_mode_validate_size
@ -2488,6 +2569,7 @@
drm_plane_create_zpos_property
drm_prime_get_contiguous_size
__drm_printfn_seq_file
drm_property_blob_put
drm_property_create
drm_property_create_bitmask
drm_property_create_bool
@ -2495,6 +2577,8 @@
drm_property_create_object
drm_property_create_range
drm_property_destroy
drm_property_lookup_blob
drm_property_replace_blob
__drm_puts_seq_file
drm_rect_calc_hscale
drm_self_refresh_helper_cleanup
@ -2536,23 +2620,18 @@
sdhci_cqe_irq
sdhci_dumpregs
sdhci_enable_clk
sdhci_execute_tuning
sdhci_pltfm_unregister
sdhci_set_power_and_bus_voltage
sdhci_set_uhs_signaling
sdhci_setup_host
# required by sdhci-of-dwcmshc.ko
device_get_match_data
devm_clk_bulk_get_optional
dma_get_required_mask
sdhci_adma_write_desc
sdhci_remove_host
sdhci_request
# required by sensor_dev.ko
class_create_file_ns
class_remove_file_ns
sdhci_reset_tuning
# required by sg.ko
blk_get_request
@ -2561,10 +2640,8 @@
blk_verify_command
cdev_alloc
class_interface_unregister
fasync_helper
get_sg_io_hdr
import_iovec
kill_fasync
put_sg_io_hdr
_raw_read_lock_irqsave
_raw_read_unlock_irqrestore
@ -2592,12 +2669,7 @@
# required by smsc95xx.ko
csum_partial
ethtool_op_get_ts_info
mdiobus_alloc_size
mdiobus_free
__mdiobus_register
mdiobus_unregister
phy_attached_info
phy_connect_direct
phy_disconnect
phy_ethtool_get_link_ksettings
@ -2615,9 +2687,6 @@
# required by snd-soc-es8316.ko
snd_pcm_hw_constraint_list
# required by snd-soc-es8326.ko
snd_soc_register_component
# required by snd-soc-hdmi-codec.ko
snd_ctl_add
snd_ctl_new1
@ -2626,7 +2695,7 @@
snd_pcm_fill_iec958_consumer
snd_pcm_fill_iec958_consumer_hw_params
snd_pcm_hw_constraint_eld
snd_pcm_stop_xrun
snd_pcm_stop
# required by snd-soc-rk817.ko
snd_soc_component_exit_regmap
@ -2637,7 +2706,8 @@
# required by snd-soc-rockchip-i2s-tdm.ko
clk_is_match
snd_soc_add_dai_controls
pm_runtime_forbid
snd_pcm_stop_xrun
# required by snd-soc-rockchip-i2s.ko
of_prop_next_string
@ -2647,8 +2717,10 @@
snd_soc_jack_add_zones
snd_soc_jack_get_type
# required by snd-soc-rockchip-spdif.ko
snd_pcm_create_iec958_consumer_hw_params
# required by snd-soc-rt5640.ko
gpiod_set_raw_value_cansleep
regmap_register_patch
snd_soc_dapm_force_bias_level
@ -2676,19 +2748,75 @@
spi_setup
stream_open
# required by stmmac-platform.ko
device_get_phy_mode
of_get_mac_address
of_phy_is_fixed_link
platform_get_irq_byname_optional
# required by stmmac.ko
devm_alloc_etherdev_mqs
dql_completed
dql_reset
ethtool_convert_legacy_u32_to_link_mode
ethtool_convert_link_mode_to_legacy_u32
flow_block_cb_setup_simple
flow_rule_match_basic
flow_rule_match_ipv4_addrs
flow_rule_match_ports
mdiobus_get_phy
__napi_alloc_skb
napi_complete_done
napi_disable
__napi_schedule
napi_schedule_prep
netdev_alert
netdev_pick_tx
netdev_rss_key_fill
netif_device_attach
netif_device_detach
netif_napi_add
__netif_napi_del
netif_schedule_queue
netif_set_real_num_rx_queues
netif_set_real_num_tx_queues
of_mdiobus_register
page_pool_alloc_pages
page_pool_create
page_pool_destroy
page_pool_put_page
page_pool_release_page
phy_init_eee
phylink_connect_phy
phylink_create
phylink_destroy
phylink_disconnect_phy
phylink_ethtool_get_eee
phylink_ethtool_get_pauseparam
phylink_ethtool_get_wol
phylink_ethtool_ksettings_get
phylink_ethtool_ksettings_set
phylink_ethtool_nway_reset
phylink_ethtool_set_eee
phylink_ethtool_set_pauseparam
phylink_ethtool_set_wol
phylink_get_eee_err
phylink_mac_change
phylink_mii_ioctl
phylink_of_phy_connect
phylink_set_port_modes
phylink_speed_down
phylink_speed_up
phylink_start
phylink_stop
pm_wakeup_dev_event
reset_control_reset
skb_tstamp_tx
# required by sw_sync.ko
dma_fence_free
dma_fence_signal_locked
__get_task_comm
put_unused_fd
# required by system_heap.ko
deferred_free
dmabuf_page_pool_alloc
dmabuf_page_pool_create
dmabuf_page_pool_destroy
dmabuf_page_pool_free
swiotlb_max_segment
# required by tcpci_husb311.ko
tcpci_get_tcpm_port
@ -2712,6 +2840,7 @@
# required by timer-rockchip.ko
clockevents_config_and_register
irq_of_parse_and_map
# required by tps65132-regulator.ko
regulator_set_active_discharge_regmap
@ -2792,6 +2921,7 @@
# required by video_rkisp.ko
media_device_cleanup
__memcpy_fromio
__memcpy_toio
param_ops_ullong
v4l2_pipeline_link_notify
@ -2831,7 +2961,6 @@
# required by zsmalloc.ko
alloc_anon_inode
__ClearPageMovable
contig_page_data
dec_zone_page_state
inc_zone_page_state
@ -2840,11 +2969,8 @@
kern_mount
kern_unmount
kill_anon_super
__lock_page
page_mapping
_raw_read_lock
_raw_read_unlock
_raw_write_lock
_raw_write_unlock
__SetPageMovable
unlock_page


@ -277,6 +277,7 @@
del_gendisk
del_timer
del_timer_sync
dentry_path_raw
desc_to_gpio
destroy_workqueue
dev_coredumpv


@ -200,3 +200,11 @@
wakeup_sources_read_unlock
wakeup_sources_walk_start
wakeup_sources_walk_next
# required by mi_mempool.ko
__traceiter_android_vh_mmput
__tracepoint_android_vh_mmput
__traceiter_android_vh_alloc_pages_reclaim_bypass
__tracepoint_android_vh_alloc_pages_reclaim_bypass
__traceiter_android_vh_alloc_pages_failure_bypass
__tracepoint_android_vh_alloc_pages_failure_bypass


@ -513,6 +513,7 @@ CONFIG_MMC_CRYPTO=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_LEDS_CLASS_FLASH=y
CONFIG_LEDS_CLASS_MULTICOLOR=y
CONFIG_LEDS_TRIGGER_TIMER=y
CONFIG_LEDS_TRIGGER_TRANSIENT=y
CONFIG_EDAC=y


@ -50,10 +50,16 @@ CONFIG_DRM_PANEL_SIMPLE=m
CONFIG_DRM_RK1000_TVE=m
CONFIG_DRM_RK630_TVE=m
CONFIG_DRM_ROCKCHIP=m
CONFIG_DRM_ROCKCHIP_RK618=m
CONFIG_DRM_ROCKCHIP_RK628=m
CONFIG_DRM_ROHM_BU18XL82=m
CONFIG_DRM_SII902X=m
CONFIG_DTC_SYMBOLS=y
# CONFIG_DWMAC_GENERIC is not set
# CONFIG_DWMAC_IPQ806X is not set
# CONFIG_DWMAC_QCOM_ETHQOS is not set
# CONFIG_DWMAC_SUN8I is not set
# CONFIG_DWMAC_SUNXI is not set
CONFIG_DW_WATCHDOG=m
CONFIG_GPIO_ROCKCHIP=m
CONFIG_GREENASIA_FF=y
@ -146,6 +152,7 @@ CONFIG_MALI_BIFROST_EXPERT=y
CONFIG_MALI_CSF_SUPPORT=y
CONFIG_MALI_PLATFORM_NAME="rk"
CONFIG_MALI_PWRSOFT_765=y
CONFIG_MFD_RK618=m
CONFIG_MFD_RK628=m
CONFIG_MFD_RK630_I2C=m
CONFIG_MFD_RK806_SPI=m
@ -186,6 +193,7 @@ CONFIG_PROXIMITY_DEVICE=m
CONFIG_PS_STK3410=m
CONFIG_PS_UCS14620=m
CONFIG_PWM_ROCKCHIP=m
CONFIG_PWRSEQ_SIMPLE=m
CONFIG_REGULATOR_ACT8865=m
CONFIG_REGULATOR_FAN53555=m
CONFIG_REGULATOR_GPIO=m
@ -236,6 +244,7 @@ CONFIG_ROCKCHIP_PM_DOMAINS=m
CONFIG_ROCKCHIP_PVTM=m
CONFIG_ROCKCHIP_REMOTECTL=m
CONFIG_ROCKCHIP_REMOTECTL_PWM=m
CONFIG_ROCKCHIP_MULTI_RGA=m
CONFIG_ROCKCHIP_RGB=y
CONFIG_ROCKCHIP_RKNPU=m
CONFIG_ROCKCHIP_SARADC=m
@ -278,6 +287,7 @@ CONFIG_SND_SOC_RT5640=m
CONFIG_SND_SOC_SPDIF=m
CONFIG_SPI_ROCKCHIP=m
CONFIG_SPI_SPIDEV=m
CONFIG_STMMAC_ETH=m
CONFIG_SW_SYNC=m
CONFIG_SYSCON_REBOOT_MODE=m
CONFIG_TEE=m
@ -328,8 +338,8 @@ CONFIG_VIDEO_RK628_BT1120=m
CONFIG_VIDEO_RK628_CSI=m
CONFIG_VIDEO_RK_IRCUT=m
CONFIG_VIDEO_ROCKCHIP_CIF=m
CONFIG_VIDEO_ROCKCHIP_HDMIRX=m
CONFIG_VIDEO_ROCKCHIP_ISP=m
CONFIG_VIDEO_ROCKCHIP_ISPP=m
CONFIG_VIDEO_S5K3L6XX=m
CONFIG_VIDEO_S5KJN1=m
CONFIG_VIDEO_SGM3784=m


@ -464,6 +464,7 @@ CONFIG_MMC_CRYPTO=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_LEDS_CLASS_FLASH=y
CONFIG_LEDS_CLASS_MULTICOLOR=y
CONFIG_LEDS_TRIGGER_TIMER=y
CONFIG_LEDS_TRIGGER_TRANSIENT=y
CONFIG_EDAC=y


@ -9,6 +9,7 @@ function update_config() {
-d CPU_BIG_ENDIAN \
-d DYNAMIC_FTRACE \
-e UNWINDER_FRAME_POINTER \
-d WERROR \
(cd ${OUT_DIR} && \
make O=${OUT_DIR} $archsubarch CROSS_COMPILE=${CROSS_COMPILE} "${TOOL_ARGS[@]}" ${MAKE_ARGS} olddefconfig)


@ -214,7 +214,7 @@ crypto-fips-objs := drbg.o ecb.o cbc.o ctr.o cts.o gcm.o xts.o hmac.o cmac.o \
gf128mul.o aes_generic.o lib-crypto-aes.o \
jitterentropy.o jitterentropy-kcapi.o \
sha1_generic.o sha256_generic.o sha512_generic.o \
lib-sha1.o lib-crypto-sha256.o
lib-memneq.o lib-sha1.o lib-crypto-sha256.o
crypto-fips-objs := $(foreach o,$(crypto-fips-objs),$(o:.o=-fips.o))
# get the arch to add its objects to $(crypto-fips-objs)


@ -464,7 +464,10 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_alloc_si);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_si);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_pages);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_shmem_page_flag);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mmput);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_sched_pelt_multiplier);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_pages_reclaim_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_pages_failure_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_page_look_around_ref);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around_migrate_page);


@ -5,6 +5,7 @@ menu "Display Engine Configuration"
config DRM_AMD_DC
bool "AMD DC - Enable new display engine"
default y
depends on BROKEN || !CC_IS_CLANG || X86_64 || SPARC64 || ARM64
select SND_HDA_COMPONENT if SND_HDA_CORE
select DRM_AMD_DC_DCN if (X86 || PPC64) && !(KCOV_INSTRUMENT_ALL && KCOV_ENABLE_COMPARISONS)
help
@ -12,6 +13,12 @@ config DRM_AMD_DC
support for AMDGPU. This adds required support for Vega and
Raven ASICs.
calculate_bandwidth() is presently broken on all !(X86_64 || SPARC64 || ARM64)
architectures built with Clang (all released versions), whereby the stack
frame gets blown up to well over 5k. This would cause an immediate kernel
panic on most architectures. We'll revert this when the following bug report
has been resolved: https://github.com/llvm/llvm-project/issues/41896.
config DRM_AMD_DC_DCN
def_bool n
help


@ -1087,6 +1087,7 @@
#define USB_DEVICE_ID_SONY_PS4_CONTROLLER_2 0x09cc
#define USB_DEVICE_ID_SONY_PS4_CONTROLLER_DONGLE 0x0ba0
#define USB_DEVICE_ID_SONY_PS5_CONTROLLER 0x0ce6
#define USB_DEVICE_ID_SONY_PS5_CONTROLLER_2 0x0df2
#define USB_DEVICE_ID_SONY_MOTION_CONTROLLER 0x03d5
#define USB_DEVICE_ID_SONY_NAVIGATION_CONTROLLER 0x042f
#define USB_DEVICE_ID_SONY_BUZZ_CONTROLLER 0x0002


@ -11,6 +11,8 @@
#include <linux/hid.h>
#include <linux/idr.h>
#include <linux/input/mt.h>
#include <linux/leds.h>
#include <linux/led-class-multicolor.h>
#include <linux/module.h>
#include <asm/unaligned.h>
@ -38,11 +40,13 @@ struct ps_device {
uint8_t battery_capacity;
int battery_status;
const char *input_dev_name; /* Name of primary input device. */
uint8_t mac_address[6]; /* Note: stored in little endian order. */
uint32_t hw_version;
uint32_t fw_version;
int (*parse_report)(struct ps_device *dev, struct hid_report *report, u8 *data, int size);
void (*remove)(struct ps_device *dev);
};
/* Calibration data for playstation motion sensors. */
@ -53,6 +57,13 @@ struct ps_calibration_data {
int sens_denom;
};
struct ps_led_info {
const char *name;
const char *color;
enum led_brightness (*brightness_get)(struct led_classdev *cdev);
int (*brightness_set)(struct led_classdev *cdev, enum led_brightness);
};
/* Seed values for DualShock4 / DualSense CRC32 for different report types. */
#define PS_INPUT_CRC32_SEED 0xA1
#define PS_OUTPUT_CRC32_SEED 0xA2
@ -97,6 +108,9 @@ struct ps_calibration_data {
#define DS_STATUS_CHARGING GENMASK(7, 4)
#define DS_STATUS_CHARGING_SHIFT 4
/* Feature version from DualSense Firmware Info report. */
#define DS_FEATURE_VERSION(major, minor) ((major & 0xff) << 8 | (minor & 0xff))
/*
* Status of a DualSense touch point contact.
* Contact IDs, with highest bit set are 'inactive'
@ -115,6 +129,7 @@ struct ps_calibration_data {
#define DS_OUTPUT_VALID_FLAG1_RELEASE_LEDS BIT(3)
#define DS_OUTPUT_VALID_FLAG1_PLAYER_INDICATOR_CONTROL_ENABLE BIT(4)
#define DS_OUTPUT_VALID_FLAG2_LIGHTBAR_SETUP_CONTROL_ENABLE BIT(1)
#define DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2 BIT(2)
#define DS_OUTPUT_POWER_SAVE_CONTROL_MIC_MUTE BIT(4)
#define DS_OUTPUT_LIGHTBAR_SETUP_LIGHT_OUT BIT(1)
@ -132,6 +147,9 @@ struct dualsense {
struct input_dev *sensors;
struct input_dev *touchpad;
/* Update version is used as a feature/capability version. */
uint16_t update_version;
/* Calibration data for accelerometer and gyroscope. */
struct ps_calibration_data accel_calib_data[3];
struct ps_calibration_data gyro_calib_data[3];
@ -142,11 +160,13 @@ struct dualsense {
uint32_t sensor_timestamp_us;
/* Compatible rumble state */
bool use_vibration_v2;
bool update_rumble;
uint8_t motor_left;
uint8_t motor_right;
/* RGB lightbar */
struct led_classdev_mc lightbar;
bool update_lightbar;
uint8_t lightbar_red;
uint8_t lightbar_green;
@ -163,6 +183,7 @@ struct dualsense {
struct led_classdev player_leds[5];
struct work_struct output_worker;
bool output_worker_initialized;
void *output_report_dmabuf;
uint8_t output_seq; /* Sequence number for output report. */
};
@ -288,6 +309,9 @@ static const struct {int x; int y; } ps_gamepad_hat_mapping[] = {
{0, 0},
};
static inline void dualsense_schedule_work(struct dualsense *ds);
static void dualsense_set_lightbar(struct dualsense *ds, uint8_t red, uint8_t green, uint8_t blue);
/*
* Add a new ps_device to ps_devices if it doesn't exist.
* Return error on duplicate device, which can happen if the same
@ -525,6 +549,71 @@ static int ps_get_report(struct hid_device *hdev, uint8_t report_id, uint8_t *bu
return 0;
}
static int ps_led_register(struct ps_device *ps_dev, struct led_classdev *led,
const struct ps_led_info *led_info)
{
int ret;
led->name = devm_kasprintf(&ps_dev->hdev->dev, GFP_KERNEL,
"%s:%s:%s", ps_dev->input_dev_name, led_info->color, led_info->name);
if (!led->name)
return -ENOMEM;
led->brightness = 0;
led->max_brightness = 1;
led->flags = LED_CORE_SUSPENDRESUME;
led->brightness_get = led_info->brightness_get;
led->brightness_set_blocking = led_info->brightness_set;
ret = devm_led_classdev_register(&ps_dev->hdev->dev, led);
if (ret) {
hid_err(ps_dev->hdev, "Failed to register LED %s: %d\n", led_info->name, ret);
return ret;
}
return 0;
}
/* Register a DualSense/DualShock4 RGB lightbar represented by a multicolor LED. */
static int ps_lightbar_register(struct ps_device *ps_dev, struct led_classdev_mc *lightbar_mc_dev,
int (*brightness_set)(struct led_classdev *, enum led_brightness))
{
struct hid_device *hdev = ps_dev->hdev;
struct mc_subled *mc_led_info;
struct led_classdev *led_cdev;
int ret;
mc_led_info = devm_kmalloc_array(&hdev->dev, 3, sizeof(*mc_led_info),
GFP_KERNEL | __GFP_ZERO);
if (!mc_led_info)
return -ENOMEM;
mc_led_info[0].color_index = LED_COLOR_ID_RED;
mc_led_info[1].color_index = LED_COLOR_ID_GREEN;
mc_led_info[2].color_index = LED_COLOR_ID_BLUE;
lightbar_mc_dev->subled_info = mc_led_info;
lightbar_mc_dev->num_colors = 3;
led_cdev = &lightbar_mc_dev->led_cdev;
led_cdev->name = devm_kasprintf(&hdev->dev, GFP_KERNEL, "%s:rgb:indicator",
ps_dev->input_dev_name);
if (!led_cdev->name)
return -ENOMEM;
led_cdev->brightness = 255;
led_cdev->max_brightness = 255;
led_cdev->brightness_set_blocking = brightness_set;
ret = devm_led_classdev_multicolor_register(&hdev->dev, lightbar_mc_dev);
if (ret < 0) {
hid_err(hdev, "Cannot register multicolor LED device\n");
return ret;
}
return 0;
}
static struct input_dev *ps_sensors_create(struct hid_device *hdev, int accel_range, int accel_res,
int gyro_range, int gyro_res)
{
@ -614,15 +703,12 @@ static ssize_t hardware_version_show(struct device *dev,
static DEVICE_ATTR_RO(hardware_version);
static struct attribute *ps_device_attributes[] = {
static struct attribute *ps_device_attrs[] = {
&dev_attr_firmware_version.attr,
&dev_attr_hardware_version.attr,
NULL
};
static const struct attribute_group ps_device_attribute_group = {
.attrs = ps_device_attributes,
};
ATTRIBUTE_GROUPS(ps_device);
static int dualsense_get_calibration_data(struct dualsense *ds)
{
@ -714,6 +800,7 @@ static int dualsense_get_calibration_data(struct dualsense *ds)
return ret;
}
static int dualsense_get_firmware_info(struct dualsense *ds)
{
uint8_t *buf;
@ -733,6 +820,15 @@ static int dualsense_get_firmware_info(struct dualsense *ds)
ds->base.hw_version = get_unaligned_le32(&buf[24]);
ds->base.fw_version = get_unaligned_le32(&buf[28]);
/* Update version is some kind of feature version. It is distinct from
* the firmware version as there can be many different variations of a
* controller over time with the same physical shell, but with different
* PCBs and other internal changes. The update version (internal name) is
* used as a means to detect what features are available and change behavior.
* Note: the version is different between DualSense and DualSense Edge.
*/
ds->update_version = get_unaligned_le16(&buf[44]);
err_free:
kfree(buf);
return ret;
@ -761,6 +857,53 @@ static int dualsense_get_mac_address(struct dualsense *ds)
return ret;
}
static int dualsense_lightbar_set_brightness(struct led_classdev *cdev,
enum led_brightness brightness)
{
struct led_classdev_mc *mc_cdev = lcdev_to_mccdev(cdev);
struct dualsense *ds = container_of(mc_cdev, struct dualsense, lightbar);
uint8_t red, green, blue;
led_mc_calc_color_components(mc_cdev, brightness);
red = mc_cdev->subled_info[0].brightness;
green = mc_cdev->subled_info[1].brightness;
blue = mc_cdev->subled_info[2].brightness;
dualsense_set_lightbar(ds, red, green, blue);
return 0;
}
static enum led_brightness dualsense_player_led_get_brightness(struct led_classdev *led)
{
struct hid_device *hdev = to_hid_device(led->dev->parent);
struct dualsense *ds = hid_get_drvdata(hdev);
return !!(ds->player_leds_state & BIT(led - ds->player_leds));
}
static int dualsense_player_led_set_brightness(struct led_classdev *led, enum led_brightness value)
{
struct hid_device *hdev = to_hid_device(led->dev->parent);
struct dualsense *ds = hid_get_drvdata(hdev);
unsigned long flags;
unsigned int led_index;
spin_lock_irqsave(&ds->base.lock, flags);
led_index = led - ds->player_leds;
if (value == LED_OFF)
ds->player_leds_state &= ~BIT(led_index);
else
ds->player_leds_state |= BIT(led_index);
ds->update_player_leds = true;
spin_unlock_irqrestore(&ds->base.lock, flags);
dualsense_schedule_work(ds);
return 0;
}
static void dualsense_init_output_report(struct dualsense *ds, struct dualsense_output_report *rp,
void *buf)
{
@ -800,6 +943,16 @@ static void dualsense_init_output_report(struct dualsense *ds, struct dualsense_
}
}
static inline void dualsense_schedule_work(struct dualsense *ds)
{
unsigned long flags;
spin_lock_irqsave(&ds->base.lock, flags);
if (ds->output_worker_initialized)
schedule_work(&ds->output_worker);
spin_unlock_irqrestore(&ds->base.lock, flags);
}
/*
* Helper function to send DualSense output reports. Applies a CRC at the end of a report
* for Bluetooth reports.
@ -838,7 +991,10 @@ static void dualsense_output_worker(struct work_struct *work)
if (ds->update_rumble) {
/* Select classic rumble style haptics and enable it. */
common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_HAPTICS_SELECT;
common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION;
if (ds->use_vibration_v2)
common->valid_flag2 |= DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2;
else
common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION;
common->motor_left = ds->motor_left;
common->motor_right = ds->motor_right;
ds->update_rumble = false;
@ -960,7 +1116,7 @@ static int dualsense_parse_report(struct ps_device *ps_dev, struct hid_report *r
spin_unlock_irqrestore(&ps_dev->lock, flags);
/* Schedule updating of microphone state at hardware level. */
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
}
ds->last_btn_mic_state = btn_mic_state;
@ -1075,10 +1231,22 @@ static int dualsense_play_effect(struct input_dev *dev, void *data, struct ff_ef
ds->motor_right = effect->u.rumble.weak_magnitude / 256;
spin_unlock_irqrestore(&ds->base.lock, flags);
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
return 0;
}
static void dualsense_remove(struct ps_device *ps_dev)
{
struct dualsense *ds = container_of(ps_dev, struct dualsense, base);
unsigned long flags;
spin_lock_irqsave(&ds->base.lock, flags);
ds->output_worker_initialized = false;
spin_unlock_irqrestore(&ds->base.lock, flags);
cancel_work_sync(&ds->output_worker);
}
static int dualsense_reset_leds(struct dualsense *ds)
{
struct dualsense_output_report report;
@ -1106,12 +1274,16 @@ static int dualsense_reset_leds(struct dualsense *ds)
static void dualsense_set_lightbar(struct dualsense *ds, uint8_t red, uint8_t green, uint8_t blue)
{
unsigned long flags;
spin_lock_irqsave(&ds->base.lock, flags);
ds->update_lightbar = true;
ds->lightbar_red = red;
ds->lightbar_green = green;
ds->lightbar_blue = blue;
spin_unlock_irqrestore(&ds->base.lock, flags);
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
}
static void dualsense_set_player_leds(struct dualsense *ds)
@ -1134,7 +1306,7 @@ static void dualsense_set_player_leds(struct dualsense *ds)
ds->update_player_leds = true;
ds->player_leds_state = player_ids[player_id];
schedule_work(&ds->output_worker);
dualsense_schedule_work(ds);
}
static struct ps_device *dualsense_create(struct hid_device *hdev)
@ -1142,7 +1314,20 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
struct dualsense *ds;
struct ps_device *ps_dev;
uint8_t max_output_report_size;
int ret;
int i, ret;
static const struct ps_led_info player_leds_info[] = {
{ LED_FUNCTION_PLAYER1, "white", dualsense_player_led_get_brightness,
dualsense_player_led_set_brightness },
{ LED_FUNCTION_PLAYER2, "white", dualsense_player_led_get_brightness,
dualsense_player_led_set_brightness },
{ LED_FUNCTION_PLAYER3, "white", dualsense_player_led_get_brightness,
dualsense_player_led_set_brightness },
{ LED_FUNCTION_PLAYER4, "white", dualsense_player_led_get_brightness,
dualsense_player_led_set_brightness },
{ LED_FUNCTION_PLAYER5, "white", dualsense_player_led_get_brightness,
dualsense_player_led_set_brightness }
};
ds = devm_kzalloc(&hdev->dev, sizeof(*ds), GFP_KERNEL);
if (!ds)
@ -1160,7 +1345,9 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
ps_dev->battery_capacity = 100; /* initial value until parse_report. */
ps_dev->battery_status = POWER_SUPPLY_STATUS_UNKNOWN;
ps_dev->parse_report = dualsense_parse_report;
ps_dev->remove = dualsense_remove;
INIT_WORK(&ds->output_worker, dualsense_output_worker);
ds->output_worker_initialized = true;
hid_set_drvdata(hdev, ds);
max_output_report_size = sizeof(struct dualsense_output_report_bt);
@ -1181,6 +1368,21 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
return ERR_PTR(ret);
}
/* Original DualSense firmware simulated classic controller rumble through
* its new haptics hardware. It felt different from classic rumble users
* were used to. Since then new firmwares were introduced to change behavior
* and make this new 'v2' behavior default on PlayStation and other platforms.
* The original DualSense requires a new enough firmware as bundled with PS5
* software released in 2021. DualSense edge supports it out of the box.
* Both devices also support the old mode, but it is not really used.
*/
if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
/* Feature version 2.21 introduced new vibration method. */
ds->use_vibration_v2 = ds->update_version >= DS_FEATURE_VERSION(2, 21);
} else if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
ds->use_vibration_v2 = true;
}
ret = ps_devices_list_add(ps_dev);
if (ret)
return ERR_PTR(ret);
@ -1196,6 +1398,8 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
ret = PTR_ERR(ds->gamepad);
goto err;
}
/* Use gamepad input device name as primary device name for e.g. LEDs */
ps_dev->input_dev_name = dev_name(&ds->gamepad->dev);
ds->sensors = ps_sensors_create(hdev, DS_ACC_RANGE, DS_ACC_RES_PER_G,
DS_GYRO_RANGE, DS_GYRO_RES_PER_DEG_S);
@ -1223,8 +1427,21 @@ static struct ps_device *dualsense_create(struct hid_device *hdev)
if (ret)
goto err;
ret = ps_lightbar_register(ps_dev, &ds->lightbar, dualsense_lightbar_set_brightness);
if (ret)
goto err;
/* Set default lightbar color. */
dualsense_set_lightbar(ds, 0, 0, 128); /* blue */
for (i = 0; i < ARRAY_SIZE(player_leds_info); i++) {
const struct ps_led_info *led_info = &player_leds_info[i];
ret = ps_led_register(ps_dev, &ds->player_leds[i], led_info);
if (ret < 0)
goto err;
}
ret = ps_device_set_player_id(ps_dev);
if (ret) {
hid_err(hdev, "Failed to assign player id for DualSense: %d\n", ret);
@ -1282,7 +1499,8 @@ static int ps_probe(struct hid_device *hdev, const struct hid_device_id *id)
goto err_stop;
}
if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) {
if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER ||
hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) {
dev = dualsense_create(hdev);
if (IS_ERR(dev)) {
hid_err(hdev, "Failed to create dualsense.\n");
@ -1291,12 +1509,6 @@ static int ps_probe(struct hid_device *hdev, const struct hid_device_id *id)
}
}
ret = devm_device_add_group(&hdev->dev, &ps_device_attribute_group);
if (ret) {
hid_err(hdev, "Failed to register sysfs nodes.\n");
goto err_close;
}
return ret;
err_close:
@ -1313,6 +1525,9 @@ static void ps_remove(struct hid_device *hdev)
ps_devices_list_remove(dev);
ps_device_release_player_id(dev);
if (dev->remove)
dev->remove(dev);
hid_hw_close(hdev);
hid_hw_stop(hdev);
}
@ -1320,6 +1535,8 @@ static void ps_remove(struct hid_device *hdev)
static const struct hid_device_id ps_devices[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
{ HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) },
{ }
};
MODULE_DEVICE_TABLE(hid, ps_devices);
@ -1330,6 +1547,9 @@ static struct hid_driver ps_driver = {
.probe = ps_probe,
.remove = ps_remove,
.raw_event = ps_raw_event,
.driver = {
.dev_groups = ps_device_groups,
},
};
static int __init ps_init(void)


@ -301,6 +301,7 @@ int dwc3_core_soft_reset(struct dwc3 *dwc)
udelay(1);
} while (--retries);
dev_warn(dwc->dev, "DWC3 controller soft reset failed.\n");
return -ETIMEDOUT;
done:


@ -3647,6 +3647,9 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
struct buffer_head *bh;
if (!ext4_has_inline_data(inode)) {
struct ext4_dir_entry_2 *de;
unsigned int offset;
/* The first directory block must not be a hole, so
* treat it as DIRENT_HTREE
*/
@ -3655,9 +3658,30 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle,
*retval = PTR_ERR(bh);
return NULL;
}
*parent_de = ext4_next_entry(
(struct ext4_dir_entry_2 *)bh->b_data,
inode->i_sb->s_blocksize);
de = (struct ext4_dir_entry_2 *) bh->b_data;
if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data,
bh->b_size, 0, 0) ||
le32_to_cpu(de->inode) != inode->i_ino ||
strcmp(".", de->name)) {
EXT4_ERROR_INODE(inode, "directory missing '.'");
brelse(bh);
*retval = -EFSCORRUPTED;
return NULL;
}
offset = ext4_rec_len_from_disk(de->rec_len,
inode->i_sb->s_blocksize);
de = ext4_next_entry(de, inode->i_sb->s_blocksize);
if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data,
bh->b_size, 0, offset) ||
le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) {
EXT4_ERROR_INODE(inode, "directory missing '..'");
brelse(bh);
*retval = -EFSCORRUPTED;
return NULL;
}
*parent_de = de;
return bh;
}


@ -1660,6 +1660,30 @@ void f2fs_put_page_dic(struct page *page)
f2fs_put_dic(dic);
}
/*
* check whether cluster blocks are contiguous, and add extent cache entry
* only if cluster blocks are logically and physically contiguous.
*/
unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn)
{
bool compressed = f2fs_data_blkaddr(dn) == COMPRESS_ADDR;
int i = compressed ? 1 : 0;
block_t first_blkaddr = data_blkaddr(dn->inode, dn->node_page,
dn->ofs_in_node + i);
for (i += 1; i < F2FS_I(dn->inode)->i_cluster_size; i++) {
block_t blkaddr = data_blkaddr(dn->inode, dn->node_page,
dn->ofs_in_node + i);
if (!__is_valid_data_blkaddr(blkaddr))
break;
if (first_blkaddr + i - (compressed ? 1 : 0) != blkaddr)
return 0;
}
return compressed ? i - 1 : i;
}
const struct address_space_operations f2fs_compress_aops = {
.releasepage = f2fs_release_page,
.invalidatepage = f2fs_invalidate_page,


@ -1085,7 +1085,7 @@ void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr)
{
dn->data_blkaddr = blkaddr;
f2fs_set_data_blkaddr(dn);
f2fs_update_extent_cache(dn);
f2fs_update_read_extent_cache(dn);
}
/* dn->ofs_in_node will be returned with up-to-date last block pointer */
@ -1151,10 +1151,10 @@ int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index)
int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index)
{
struct extent_info ei = {0, 0, 0};
struct extent_info ei = {0, };
struct inode *inode = dn->inode;
if (f2fs_lookup_extent_cache(inode, index, &ei)) {
if (f2fs_lookup_read_extent_cache(inode, index, &ei)) {
dn->data_blkaddr = ei.blk + index - ei.fofs;
return 0;
}
@ -1168,14 +1168,14 @@ struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
struct address_space *mapping = inode->i_mapping;
struct dnode_of_data dn;
struct page *page;
struct extent_info ei = {0,0,0};
struct extent_info ei = {0, };
int err;
page = f2fs_grab_cache_page(mapping, index, for_write);
if (!page)
return ERR_PTR(-ENOMEM);
if (f2fs_lookup_extent_cache(inode, index, &ei)) {
if (f2fs_lookup_read_extent_cache(inode, index, &ei)) {
dn.data_blkaddr = ei.blk + index - ei.fofs;
if (!f2fs_is_valid_blkaddr(F2FS_I_SB(inode), dn.data_blkaddr,
DATA_GENERIC_ENHANCE_READ)) {
@ -1466,7 +1466,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
int err = 0, ofs = 1;
unsigned int ofs_in_node, last_ofs_in_node;
blkcnt_t prealloc;
struct extent_info ei = {0,0,0};
struct extent_info ei = {0, };
block_t blkaddr;
unsigned int start_pgofs;
@ -1480,7 +1480,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
pgofs = (pgoff_t)map->m_lblk;
end = pgofs + maxblocks;
if (!create && f2fs_lookup_extent_cache(inode, pgofs, &ei)) {
if (!create && f2fs_lookup_read_extent_cache(inode, pgofs, &ei)) {
if (f2fs_lfs_mode(sbi) && flag == F2FS_GET_BLOCK_DIO &&
map->m_may_create)
goto next_dnode;
@ -1654,7 +1654,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
if (map->m_flags & F2FS_MAP_MAPPED) {
unsigned int ofs = start_pgofs - map->m_lblk;
f2fs_update_extent_cache_range(&dn,
f2fs_update_read_extent_cache_range(&dn,
start_pgofs, map->m_pblk + ofs,
map->m_len - ofs);
}
@ -1679,7 +1679,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
if (map->m_flags & F2FS_MAP_MAPPED) {
unsigned int ofs = start_pgofs - map->m_lblk;
f2fs_update_extent_cache_range(&dn,
f2fs_update_read_extent_cache_range(&dn,
start_pgofs, map->m_pblk + ofs,
map->m_len - ofs);
}
@ -2156,6 +2156,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
sector_t last_block_in_file;
const unsigned blocksize = blks_to_bytes(inode, 1);
struct decompress_io_ctx *dic = NULL;
struct extent_info ei = {};
bool from_dnode = true;
int i;
int ret = 0;
@ -2188,6 +2190,12 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
if (f2fs_cluster_is_empty(cc))
goto out;
if (f2fs_lookup_read_extent_cache(inode, start_idx, &ei))
from_dnode = false;
if (!from_dnode)
goto skip_reading_dnode;
set_new_dnode(&dn, inode, NULL, NULL, 0);
ret = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE);
if (ret)
@ -2195,11 +2203,13 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
f2fs_bug_on(sbi, dn.data_blkaddr != COMPRESS_ADDR);
skip_reading_dnode:
for (i = 1; i < cc->cluster_size; i++) {
block_t blkaddr;
blkaddr = data_blkaddr(dn.inode, dn.node_page,
dn.ofs_in_node + i);
blkaddr = from_dnode ? data_blkaddr(dn.inode, dn.node_page,
dn.ofs_in_node + i) :
ei.blk + i - 1;
if (!__is_valid_data_blkaddr(blkaddr))
break;
@ -2209,6 +2219,9 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
goto out_put_dnode;
}
cc->nr_cpages++;
if (!from_dnode && i >= ei.c_len)
break;
}
/* nothing to decompress */
@ -2228,8 +2241,9 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
block_t blkaddr;
struct bio_post_read_ctx *ctx;
blkaddr = data_blkaddr(dn.inode, dn.node_page,
dn.ofs_in_node + i + 1);
blkaddr = from_dnode ? data_blkaddr(dn.inode, dn.node_page,
dn.ofs_in_node + i + 1) :
ei.blk + i;
f2fs_wait_on_block_writeback(inode, blkaddr);
@ -2274,13 +2288,15 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
*last_block_in_bio = blkaddr;
}
f2fs_put_dnode(&dn);
if (from_dnode)
f2fs_put_dnode(&dn);
*bio_ret = bio;
return 0;
out_put_dnode:
f2fs_put_dnode(&dn);
if (from_dnode)
f2fs_put_dnode(&dn);
out:
for (i = 0; i < cc->cluster_size; i++) {
if (cc->rpages[i]) {
@ -2584,14 +2600,14 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
struct page *page = fio->page;
struct inode *inode = page->mapping->host;
struct dnode_of_data dn;
struct extent_info ei = {0,0,0};
struct extent_info ei = {0, };
struct node_info ni;
bool ipu_force = false;
int err = 0;
set_new_dnode(&dn, inode, NULL, NULL, 0);
if (need_inplace_update(fio) &&
f2fs_lookup_extent_cache(inode, page->index, &ei)) {
f2fs_lookup_read_extent_cache(inode, page->index, &ei)) {
fio->old_blkaddr = ei.blk + page->index - ei.fofs;
if (!f2fs_is_valid_blkaddr(fio->sbi, fio->old_blkaddr,
@ -3265,7 +3281,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
struct dnode_of_data dn;
struct page *ipage;
bool locked = false;
struct extent_info ei = {0,0,0};
struct extent_info ei = {0, };
int err = 0;
int flag;
@ -3316,7 +3332,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
} else if (locked) {
err = f2fs_get_block(&dn, index);
} else {
if (f2fs_lookup_extent_cache(inode, index, &ei)) {
if (f2fs_lookup_read_extent_cache(inode, index, &ei)) {
dn.data_blkaddr = ei.blk + index - ei.fofs;
} else {
/* hole case */


@ -72,15 +72,26 @@ static void update_general_status(struct f2fs_sb_info *sbi)
si->main_area_zones = si->main_area_sections /
le32_to_cpu(raw_super->secs_per_zone);
/* validation check of the segment numbers */
/* general extent cache stats */
for (i = 0; i < NR_EXTENT_CACHES; i++) {
struct extent_tree_info *eti = &sbi->extent_tree[i];
si->hit_cached[i] = atomic64_read(&sbi->read_hit_cached[i]);
si->hit_rbtree[i] = atomic64_read(&sbi->read_hit_rbtree[i]);
si->total_ext[i] = atomic64_read(&sbi->total_hit_ext[i]);
si->hit_total[i] = si->hit_cached[i] + si->hit_rbtree[i];
si->ext_tree[i] = atomic_read(&eti->total_ext_tree);
si->zombie_tree[i] = atomic_read(&eti->total_zombie_tree);
si->ext_node[i] = atomic_read(&eti->total_ext_node);
}
/* read extent_cache only */
si->hit_largest = atomic64_read(&sbi->read_hit_largest);
si->hit_cached = atomic64_read(&sbi->read_hit_cached);
si->hit_rbtree = atomic64_read(&sbi->read_hit_rbtree);
si->hit_total = si->hit_largest + si->hit_cached + si->hit_rbtree;
si->total_ext = atomic64_read(&sbi->total_hit_ext);
si->ext_tree = atomic_read(&sbi->total_ext_tree);
si->zombie_tree = atomic_read(&sbi->total_zombie_tree);
si->ext_node = atomic_read(&sbi->total_ext_node);
si->hit_total[EX_READ] += si->hit_largest;
/* block age extent_cache only */
si->allocated_data_blocks = atomic64_read(&sbi->allocated_data_blocks);
/* validation check of the segment numbers */
si->ndirty_node = get_pages(sbi, F2FS_DIRTY_NODES);
si->ndirty_dent = get_pages(sbi, F2FS_DIRTY_DENTS);
si->ndirty_meta = get_pages(sbi, F2FS_DIRTY_META);
@ -299,10 +310,16 @@ static void update_mem_info(struct f2fs_sb_info *sbi)
si->cache_mem += si->inmem_pages * sizeof(struct inmem_pages);
for (i = 0; i < MAX_INO_ENTRY; i++)
si->cache_mem += sbi->im[i].ino_num * sizeof(struct ino_entry);
si->cache_mem += atomic_read(&sbi->total_ext_tree) *
for (i = 0; i < NR_EXTENT_CACHES; i++) {
struct extent_tree_info *eti = &sbi->extent_tree[i];
si->ext_mem[i] = atomic_read(&eti->total_ext_tree) *
sizeof(struct extent_tree);
si->cache_mem += atomic_read(&sbi->total_ext_node) *
si->ext_mem[i] += atomic_read(&eti->total_ext_node) *
sizeof(struct extent_node);
si->cache_mem += si->ext_mem[i];
}
si->page_mem = 0;
if (sbi->node_inode) {
@ -471,16 +488,34 @@ static int stat_show(struct seq_file *s, void *v)
si->skipped_atomic_files[BG_GC]);
seq_printf(s, "BG skip : IO: %u, Other: %u\n",
si->io_skip_bggc, si->other_skip_bggc);
seq_puts(s, "\nExtent Cache:\n");
seq_puts(s, "\nExtent Cache (Read):\n");
seq_printf(s, " - Hit Count: L1-1:%llu L1-2:%llu L2:%llu\n",
si->hit_largest, si->hit_cached,
si->hit_rbtree);
si->hit_largest, si->hit_cached[EX_READ],
si->hit_rbtree[EX_READ]);
seq_printf(s, " - Hit Ratio: %llu%% (%llu / %llu)\n",
!si->total_ext ? 0 :
div64_u64(si->hit_total * 100, si->total_ext),
si->hit_total, si->total_ext);
!si->total_ext[EX_READ] ? 0 :
div64_u64(si->hit_total[EX_READ] * 100,
si->total_ext[EX_READ]),
si->hit_total[EX_READ], si->total_ext[EX_READ]);
seq_printf(s, " - Inner Struct Count: tree: %d(%d), node: %d\n",
si->ext_tree, si->zombie_tree, si->ext_node);
si->ext_tree[EX_READ], si->zombie_tree[EX_READ],
si->ext_node[EX_READ]);
seq_puts(s, "\nExtent Cache (Block Age):\n");
seq_printf(s, " - Allocated Data Blocks: %llu\n",
si->allocated_data_blocks);
seq_printf(s, " - Hit Count: L1:%llu L2:%llu\n",
si->hit_cached[EX_BLOCK_AGE],
si->hit_rbtree[EX_BLOCK_AGE]);
seq_printf(s, " - Hit Ratio: %llu%% (%llu / %llu)\n",
!si->total_ext[EX_BLOCK_AGE] ? 0 :
div64_u64(si->hit_total[EX_BLOCK_AGE] * 100,
si->total_ext[EX_BLOCK_AGE]),
si->hit_total[EX_BLOCK_AGE],
si->total_ext[EX_BLOCK_AGE]);
seq_printf(s, " - Inner Struct Count: tree: %d(%d), node: %d\n",
si->ext_tree[EX_BLOCK_AGE],
si->zombie_tree[EX_BLOCK_AGE],
si->ext_node[EX_BLOCK_AGE]);
seq_puts(s, "\nBalancing F2FS Async:\n");
seq_printf(s, " - DIO (R: %4d, W: %4d)\n",
si->nr_dio_read, si->nr_dio_write);
@ -546,8 +581,12 @@ static int stat_show(struct seq_file *s, void *v)
(si->base_mem + si->cache_mem + si->page_mem) >> 10);
seq_printf(s, " - static: %llu KB\n",
si->base_mem >> 10);
seq_printf(s, " - cached: %llu KB\n",
seq_printf(s, " - cached all: %llu KB\n",
si->cache_mem >> 10);
seq_printf(s, " - read extent cache: %llu KB\n",
si->ext_mem[EX_READ] >> 10);
seq_printf(s, " - block age extent cache: %llu KB\n",
si->ext_mem[EX_BLOCK_AGE] >> 10);
seq_printf(s, " - paged : %llu KB\n",
si->page_mem >> 10);
}
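With both caches enabled, the reworked stat_show() groups the counters per cache. The relevant part of the f2fs status file then reads roughly as follows (numbers are purely illustrative, not taken from a real run):

Extent Cache (Read):
 - Hit Count: L1-1:1024 L1-2:256 L2:128
 - Hit Ratio: 70% (1408 / 2000)
 - Inner Struct Count: tree: 12(0), node: 96

Extent Cache (Block Age):
 - Allocated Data Blocks: 524288
 - Hit Count: L1:300 L2:80
 - Hit Ratio: 38% (380 / 1000)
 - Inner Struct Count: tree: 8(0), node: 40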
@ -579,10 +618,15 @@ int f2fs_build_stats(struct f2fs_sb_info *sbi)
si->sbi = sbi;
sbi->stat_info = si;
atomic64_set(&sbi->total_hit_ext, 0);
atomic64_set(&sbi->read_hit_rbtree, 0);
/* general extent cache stats */
for (i = 0; i < NR_EXTENT_CACHES; i++) {
atomic64_set(&sbi->total_hit_ext[i], 0);
atomic64_set(&sbi->read_hit_rbtree[i], 0);
atomic64_set(&sbi->read_hit_cached[i], 0);
}
/* read extent_cache only */
atomic64_set(&sbi->read_hit_largest, 0);
atomic64_set(&sbi->read_hit_cached, 0);
atomic_set(&sbi->inline_xattr, 0);
atomic_set(&sbi->inline_inode, 0);

File diff suppressed because it is too large

View File

@ -84,7 +84,7 @@ extern const char *f2fs_fault_name[FAULT_MAX];
#define F2FS_MOUNT_FLUSH_MERGE 0x00000400
#define F2FS_MOUNT_NOBARRIER 0x00000800
#define F2FS_MOUNT_FASTBOOT 0x00001000
#define F2FS_MOUNT_EXTENT_CACHE 0x00002000
#define F2FS_MOUNT_READ_EXTENT_CACHE 0x00002000
#define F2FS_MOUNT_DATA_FLUSH 0x00008000
#define F2FS_MOUNT_FAULT_INJECTION 0x00010000
#define F2FS_MOUNT_USRQUOTA 0x00080000
@ -99,6 +99,7 @@ extern const char *f2fs_fault_name[FAULT_MAX];
#define F2FS_MOUNT_MERGE_CHECKPOINT 0x10000000
#define F2FS_MOUNT_GC_MERGE 0x20000000
#define F2FS_MOUNT_COMPRESS_CACHE 0x40000000
#define F2FS_MOUNT_AGE_EXTENT_CACHE 0x80000000
#define F2FS_OPTION(sbi) ((sbi)->mount_opt)
#define clear_opt(sbi, option) (F2FS_OPTION(sbi).opt &= ~F2FS_MOUNT_##option)
@ -568,7 +569,26 @@ enum {
#define F2FS_MIN_EXTENT_LEN 64 /* minimum extent length */
/* number of extent info in extent cache we try to shrink */
#define EXTENT_CACHE_SHRINK_NUMBER 128
#define READ_EXTENT_CACHE_SHRINK_NUMBER 128
/* number of age extent info in extent cache we try to shrink */
#define AGE_EXTENT_CACHE_SHRINK_NUMBER 128
#define LAST_AGE_WEIGHT 30
#define SAME_AGE_REGION 1024
/*
* Define data block with age less than 1GB as hot data
* define data block with age less than 10GB but more than 1GB as warm data
*/
#define DEF_HOT_DATA_AGE_THRESHOLD 262144
#define DEF_WARM_DATA_AGE_THRESHOLD 2621440
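With f2fs's 4 KiB block size these defaults match the comment above: 262144 blocks * 4 KiB = 1 GiB for DEF_HOT_DATA_AGE_THRESHOLD and 2621440 blocks * 4 KiB = 10 GiB for DEF_WARM_DATA_AGE_THRESHOLD. Both values can later be overridden through the hot_data_age_threshold and warm_data_age_threshold sysfs nodes added in sysfs.c.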
/* extent cache type */
enum extent_type {
EX_READ,
EX_BLOCK_AGE,
NR_EXTENT_CACHES,
};
struct rb_entry {
struct rb_node rb_node; /* rb node located in rb-tree */
@ -584,7 +604,24 @@ struct rb_entry {
struct extent_info {
unsigned int fofs; /* start offset in a file */
unsigned int len; /* length of the extent */
u32 blk; /* start block address of the extent */
union {
/* read extent_cache */
struct {
/* start block address of the extent */
block_t blk;
#ifdef CONFIG_F2FS_FS_COMPRESSION
/* physical extent length of compressed blocks */
unsigned int c_len;
#endif
};
/* block age extent_cache */
struct {
/* block age of the extent */
unsigned long long age;
/* last total blocks allocated */
unsigned long long last_blocks;
};
};
};
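The union keeps struct extent_info at its previous size while serving both trees: the first arm is the EX_READ mapping from a file range to disk blocks (plus the compressed payload length c_len), the second arm is the EX_BLOCK_AGE record of how recently the range was rewritten. A small userspace model of the EX_READ offset math used by the lookup callers above ("ei.blk + index - ei.fofs"); the struct and values are only for illustration:

#include <stdio.h>

struct read_extent { unsigned int fofs, len, blk; };	/* models the EX_READ arm */

/* A page offset inside the cached range [fofs, fofs + len) resolves to
 * blk + (pgofs - fofs), the same expression the lookup callers use. */
static unsigned int extent_blkaddr(const struct read_extent *ei, unsigned int pgofs)
{
	return ei->blk + (pgofs - ei->fofs);
}

int main(void)
{
	struct read_extent ei = { .fofs = 100, .len = 8, .blk = 5000 };
	printf("%u\n", extent_blkaddr(&ei, 103));	/* prints 5003 */
	return 0;
}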
struct extent_node {
@ -596,13 +633,25 @@ struct extent_node {
struct extent_tree {
nid_t ino; /* inode number */
enum extent_type type; /* keep the extent tree type */
struct rb_root_cached root; /* root of extent info rb-tree */
struct extent_node *cached_en; /* recently accessed extent node */
struct extent_info largest; /* largested extent info */
struct list_head list; /* to be used by sbi->zombie_list */
rwlock_t lock; /* protect extent info rb-tree */
atomic_t node_cnt; /* # of extent node in rb-tree*/
bool largest_updated; /* largest extent updated */
struct extent_info largest; /* largest cached extent for EX_READ */
};
struct extent_tree_info {
struct radix_tree_root extent_tree_root;/* cache extent cache entries */
struct mutex extent_tree_lock; /* locking extent radix tree */
struct list_head extent_list; /* lru list for shrinker */
spinlock_t extent_lock; /* locking extent lru list */
atomic_t total_ext_tree; /* extent tree count */
struct list_head zombie_list; /* extent zombie tree list */
atomic_t total_zombie_tree; /* extent zombie tree count */
atomic_t total_ext_node; /* extent info count */
};
/*
@ -761,7 +810,8 @@ struct f2fs_inode_info {
struct list_head inmem_pages; /* inmemory pages managed by f2fs */
struct task_struct *inmem_task; /* store inmemory task */
struct mutex inmem_lock; /* lock for inmemory pages */
struct extent_tree *extent_tree; /* cached extent_tree entry */
struct extent_tree *extent_tree[NR_EXTENT_CACHES];
/* cached extent_tree entry */
/* avoid racing between foreground op and gc */
struct f2fs_rwsem i_gc_rwsem[2];
@ -783,7 +833,7 @@ struct f2fs_inode_info {
unsigned int i_cluster_size; /* cluster size */
};
static inline void get_extent_info(struct extent_info *ext,
static inline void get_read_extent_info(struct extent_info *ext,
struct f2fs_extent *i_ext)
{
ext->fofs = le32_to_cpu(i_ext->fofs);
@ -791,7 +841,7 @@ static inline void get_extent_info(struct extent_info *ext,
ext->len = le32_to_cpu(i_ext->len);
}
static inline void set_raw_extent(struct extent_info *ext,
static inline void set_raw_read_extent(struct extent_info *ext,
struct f2fs_extent *i_ext)
{
i_ext->fofs = cpu_to_le32(ext->fofs);
@ -799,14 +849,6 @@ static inline void set_raw_extent(struct extent_info *ext,
i_ext->len = cpu_to_le32(ext->len);
}
static inline void set_extent_info(struct extent_info *ei, unsigned int fofs,
u32 blk, unsigned int len)
{
ei->fofs = fofs;
ei->blk = blk;
ei->len = len;
}
static inline bool __is_discard_mergeable(struct discard_info *back,
struct discard_info *front, unsigned int max_len)
{
@ -826,35 +868,6 @@ static inline bool __is_discard_front_mergeable(struct discard_info *cur,
return __is_discard_mergeable(cur, front, max_len);
}
static inline bool __is_extent_mergeable(struct extent_info *back,
struct extent_info *front)
{
return (back->fofs + back->len == front->fofs &&
back->blk + back->len == front->blk);
}
static inline bool __is_back_mergeable(struct extent_info *cur,
struct extent_info *back)
{
return __is_extent_mergeable(back, cur);
}
static inline bool __is_front_mergeable(struct extent_info *cur,
struct extent_info *front)
{
return __is_extent_mergeable(cur, front);
}
extern void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync);
static inline void __try_update_largest_extent(struct extent_tree *et,
struct extent_node *en)
{
if (en->ei.len > et->largest.len) {
et->largest = en->ei;
et->largest_updated = true;
}
}
/*
* For free nid management
*/
@ -1596,14 +1609,12 @@ struct f2fs_sb_info {
struct mutex flush_lock; /* for flush exclusion */
/* for extent tree cache */
struct radix_tree_root extent_tree_root;/* cache extent cache entries */
struct mutex extent_tree_lock; /* locking extent radix tree */
struct list_head extent_list; /* lru list for shrinker */
spinlock_t extent_lock; /* locking extent lru list */
atomic_t total_ext_tree; /* extent tree count */
struct list_head zombie_list; /* extent zombie tree list */
atomic_t total_zombie_tree; /* extent zombie tree count */
atomic_t total_ext_node; /* extent info count */
struct extent_tree_info extent_tree[NR_EXTENT_CACHES];
atomic64_t allocated_data_blocks; /* for block age extent_cache */
/* The threshold used for hot and warm data separation */
unsigned int hot_data_age_threshold;
unsigned int warm_data_age_threshold;
/* basic filesystem units */
unsigned int log_sectors_per_block; /* log2 sectors per block */
@ -1684,10 +1695,14 @@ struct f2fs_sb_info {
unsigned int segment_count[2]; /* # of allocated segments */
unsigned int block_count[2]; /* # of allocated blocks */
atomic_t inplace_count; /* # of inplace update */
atomic64_t total_hit_ext; /* # of lookup extent cache */
atomic64_t read_hit_rbtree; /* # of hit rbtree extent node */
atomic64_t read_hit_largest; /* # of hit largest extent node */
atomic64_t read_hit_cached; /* # of hit cached extent node */
/* # of lookup extent cache */
atomic64_t total_hit_ext[NR_EXTENT_CACHES];
/* # of hit rbtree extent node */
atomic64_t read_hit_rbtree[NR_EXTENT_CACHES];
/* # of hit cached extent node */
atomic64_t read_hit_cached[NR_EXTENT_CACHES];
/* # of hit largest extent node in read extent cache */
atomic64_t read_hit_largest;
atomic_t inline_xattr; /* # of inline_xattr inodes */
atomic_t inline_inode; /* # of inline_data inodes */
atomic_t inline_dir; /* # of inline_dentry inodes */
@ -2480,6 +2495,7 @@ static inline block_t __start_sum_addr(struct f2fs_sb_info *sbi)
return le32_to_cpu(F2FS_CKPT(sbi)->cp_pack_start_sum);
}
extern void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync);
static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
struct inode *inode, bool is_inode)
{
@ -3769,9 +3785,19 @@ struct f2fs_stat_info {
struct f2fs_sb_info *sbi;
int all_area_segs, sit_area_segs, nat_area_segs, ssa_area_segs;
int main_area_segs, main_area_sections, main_area_zones;
unsigned long long hit_largest, hit_cached, hit_rbtree;
unsigned long long hit_total, total_ext;
int ext_tree, zombie_tree, ext_node;
unsigned long long hit_cached[NR_EXTENT_CACHES];
unsigned long long hit_rbtree[NR_EXTENT_CACHES];
unsigned long long total_ext[NR_EXTENT_CACHES];
unsigned long long hit_total[NR_EXTENT_CACHES];
int ext_tree[NR_EXTENT_CACHES];
int zombie_tree[NR_EXTENT_CACHES];
int ext_node[NR_EXTENT_CACHES];
/* to count memory footprint */
unsigned long long ext_mem[NR_EXTENT_CACHES];
/* for read extent cache */
unsigned long long hit_largest;
/* for block age extent cache */
unsigned long long allocated_data_blocks;
int ndirty_node, ndirty_dent, ndirty_meta, ndirty_imeta;
int ndirty_data, ndirty_qdata;
int inmem_pages;
@ -3832,10 +3858,10 @@ static inline struct f2fs_stat_info *F2FS_STAT(struct f2fs_sb_info *sbi)
#define stat_other_skip_bggc_count(sbi) ((sbi)->other_skip_bggc++)
#define stat_inc_dirty_inode(sbi, type) ((sbi)->ndirty_inode[type]++)
#define stat_dec_dirty_inode(sbi, type) ((sbi)->ndirty_inode[type]--)
#define stat_inc_total_hit(sbi) (atomic64_inc(&(sbi)->total_hit_ext))
#define stat_inc_rbtree_node_hit(sbi) (atomic64_inc(&(sbi)->read_hit_rbtree))
#define stat_inc_total_hit(sbi, type) (atomic64_inc(&(sbi)->total_hit_ext[type]))
#define stat_inc_rbtree_node_hit(sbi, type) (atomic64_inc(&(sbi)->read_hit_rbtree[type]))
#define stat_inc_largest_node_hit(sbi) (atomic64_inc(&(sbi)->read_hit_largest))
#define stat_inc_cached_node_hit(sbi) (atomic64_inc(&(sbi)->read_hit_cached))
#define stat_inc_cached_node_hit(sbi, type) (atomic64_inc(&(sbi)->read_hit_cached[type]))
#define stat_inc_inline_xattr(inode) \
do { \
if (f2fs_has_inline_xattr(inode)) \
@ -3961,10 +3987,10 @@ void f2fs_update_sit_info(struct f2fs_sb_info *sbi);
#define stat_other_skip_bggc_count(sbi) do { } while (0)
#define stat_inc_dirty_inode(sbi, type) do { } while (0)
#define stat_dec_dirty_inode(sbi, type) do { } while (0)
#define stat_inc_total_hit(sbi) do { } while (0)
#define stat_inc_rbtree_node_hit(sbi) do { } while (0)
#define stat_inc_total_hit(sbi, type) do { } while (0)
#define stat_inc_rbtree_node_hit(sbi, type) do { } while (0)
#define stat_inc_largest_node_hit(sbi) do { } while (0)
#define stat_inc_cached_node_hit(sbi) do { } while (0)
#define stat_inc_cached_node_hit(sbi, type) do { } while (0)
#define stat_inc_inline_xattr(inode) do { } while (0)
#define stat_dec_inline_xattr(inode) do { } while (0)
#define stat_inc_inline_inode(inode) do { } while (0)
@ -4069,20 +4095,34 @@ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root,
bool force, bool *leftmost);
bool f2fs_check_rb_tree_consistence(struct f2fs_sb_info *sbi,
struct rb_root_cached *root, bool check_key);
unsigned int f2fs_shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink);
void f2fs_init_extent_tree(struct inode *inode, struct page *ipage);
void f2fs_init_extent_tree(struct inode *inode);
void f2fs_drop_extent_tree(struct inode *inode);
unsigned int f2fs_destroy_extent_node(struct inode *inode);
void f2fs_destroy_extent_node(struct inode *inode);
void f2fs_destroy_extent_tree(struct inode *inode);
bool f2fs_lookup_extent_cache(struct inode *inode, pgoff_t pgofs,
struct extent_info *ei);
void f2fs_update_extent_cache(struct dnode_of_data *dn);
void f2fs_update_extent_cache_range(struct dnode_of_data *dn,
pgoff_t fofs, block_t blkaddr, unsigned int len);
void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi);
int __init f2fs_create_extent_cache(void);
void f2fs_destroy_extent_cache(void);
/* read extent cache ops */
void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage);
bool f2fs_lookup_read_extent_cache(struct inode *inode, pgoff_t pgofs,
struct extent_info *ei);
void f2fs_update_read_extent_cache(struct dnode_of_data *dn);
void f2fs_update_read_extent_cache_range(struct dnode_of_data *dn,
pgoff_t fofs, block_t blkaddr, unsigned int len);
unsigned int f2fs_shrink_read_extent_tree(struct f2fs_sb_info *sbi,
int nr_shrink);
/* block age extent cache ops */
void f2fs_init_age_extent_tree(struct inode *inode);
bool f2fs_lookup_age_extent_cache(struct inode *inode, pgoff_t pgofs,
struct extent_info *ei);
void f2fs_update_age_extent_cache(struct dnode_of_data *dn);
void f2fs_update_age_extent_cache_range(struct dnode_of_data *dn,
pgoff_t fofs, unsigned int len);
unsigned int f2fs_shrink_age_extent_tree(struct f2fs_sb_info *sbi,
int nr_shrink);
/*
* sysfs.c
*/
@ -4146,12 +4186,16 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
struct writeback_control *wbc,
enum iostat_type io_type);
int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index);
void f2fs_update_read_extent_tree_range_compressed(struct inode *inode,
pgoff_t fofs, block_t blkaddr,
unsigned int llen, unsigned int c_len);
int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
unsigned nr_pages, sector_t *last_block_in_bio,
bool is_readahead, bool for_write);
struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc);
void f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed);
void f2fs_put_page_dic(struct page *page);
unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn);
int f2fs_init_compress_ctx(struct compress_ctx *cc);
void f2fs_destroy_compress_ctx(struct compress_ctx *cc, bool reuse);
void f2fs_init_compress_info(struct f2fs_sb_info *sbi);
@ -4206,6 +4250,7 @@ static inline void f2fs_put_page_dic(struct page *page)
{
WARN_ON_ONCE(1);
}
static inline unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn) { return 0; }
static inline int f2fs_init_compress_inode(struct f2fs_sb_info *sbi) { return 0; }
static inline void f2fs_destroy_compress_inode(struct f2fs_sb_info *sbi) { }
static inline int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi) { return 0; }
@ -4221,6 +4266,10 @@ static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi,
static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi,
nid_t ino) { }
#define inc_compr_inode_stat(inode) do { } while (0)
static inline void f2fs_update_read_extent_tree_range_compressed(
struct inode *inode,
pgoff_t fofs, block_t blkaddr,
unsigned int llen, unsigned int c_len) { }
#endif
static inline int set_compress_context(struct inode *inode)
@ -4290,26 +4339,6 @@ F2FS_FEATURE_FUNCS(casefold, CASEFOLD);
F2FS_FEATURE_FUNCS(compression, COMPRESSION);
F2FS_FEATURE_FUNCS(readonly, RO);
static inline bool f2fs_may_extent_tree(struct inode *inode)
{
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
if (!test_opt(sbi, EXTENT_CACHE) ||
is_inode_flag_set(inode, FI_NO_EXTENT) ||
(is_inode_flag_set(inode, FI_COMPRESSED_FILE) &&
!f2fs_sb_has_readonly(sbi)))
return false;
/*
* for recovered files during mount do not create extents
* if shrinker is not registered.
*/
if (list_empty(&sbi->s_list))
return false;
return S_ISREG(inode->i_mode);
}
#ifdef CONFIG_BLK_DEV_ZONED
static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
block_t blkaddr)

View File

@ -607,7 +607,8 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
*/
fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page),
dn->inode) + ofs;
f2fs_update_extent_cache_range(dn, fofs, 0, len);
f2fs_update_read_extent_cache_range(dn, fofs, 0, len);
f2fs_update_age_extent_cache_range(dn, fofs, nr_free);
dec_valid_block_count(sbi, dn->inode, nr_free);
}
dn->ofs_in_node = ofs;
@ -1430,7 +1431,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
f2fs_set_data_blkaddr(dn);
}
f2fs_update_extent_cache_range(dn, start, 0, index - start);
f2fs_update_read_extent_cache_range(dn, start, 0, index - start);
return ret;
}
@ -2590,7 +2591,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
struct f2fs_map_blocks map = { .m_next_extent = NULL,
.m_seg_type = NO_CHECK_TYPE,
.m_may_create = false };
struct extent_info ei = {0, 0, 0};
struct extent_info ei = {};
pgoff_t pg_start, pg_end, next_pgofs;
unsigned int blk_per_seg = sbi->blocks_per_seg;
unsigned int total = 0, sec_num;
@ -2622,7 +2623,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
* lookup mapping info in extent cache, skip defragmenting if physical
* block addresses are continuous.
*/
if (f2fs_lookup_extent_cache(inode, pg_start, &ei)) {
if (f2fs_lookup_read_extent_cache(inode, pg_start, &ei)) {
if (ei.fofs + ei.len >= pg_end)
goto out;
}

View File

@ -1054,7 +1054,7 @@ static int ra_data_block(struct inode *inode, pgoff_t index)
struct address_space *mapping = inode->i_mapping;
struct dnode_of_data dn;
struct page *page;
struct extent_info ei = {0, 0, 0};
struct extent_info ei = {0, };
struct f2fs_io_info fio = {
.sbi = sbi,
.ino = inode->i_ino,
@ -1072,7 +1072,7 @@ static int ra_data_block(struct inode *inode, pgoff_t index)
if (!page)
return -ENOMEM;
if (f2fs_lookup_extent_cache(inode, index, &ei)) {
if (f2fs_lookup_read_extent_cache(inode, index, &ei)) {
dn.data_blkaddr = ei.blk + index - ei.fofs;
if (unlikely(!f2fs_is_valid_blkaddr(sbi, dn.data_blkaddr,
DATA_GENERIC_ENHANCE_READ))) {

View File

@ -260,8 +260,8 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
return false;
}
if (F2FS_I(inode)->extent_tree) {
struct extent_info *ei = &F2FS_I(inode)->extent_tree->largest;
if (fi->extent_tree[EX_READ]) {
struct extent_info *ei = &fi->extent_tree[EX_READ]->largest;
if (ei->len &&
(!f2fs_is_valid_blkaddr(sbi, ei->blk,
@ -380,8 +380,6 @@ static int do_read_inode(struct inode *inode)
fi->i_pino = le32_to_cpu(ri->i_pino);
fi->i_dir_level = ri->i_dir_level;
f2fs_init_extent_tree(inode, node_page);
get_inline_info(inode, ri);
fi->i_extra_isize = f2fs_has_extra_attr(inode) ?
@ -469,6 +467,11 @@ static int do_read_inode(struct inode *inode)
F2FS_I(inode)->i_disk_time[1] = inode->i_ctime;
F2FS_I(inode)->i_disk_time[2] = inode->i_mtime;
F2FS_I(inode)->i_disk_time[3] = F2FS_I(inode)->i_crtime;
/* Need all the flag bits */
f2fs_init_read_extent_tree(inode, node_page);
f2fs_init_age_extent_tree(inode);
f2fs_put_page(node_page, 1);
stat_inc_inline_xattr(inode);
@ -571,7 +574,7 @@ struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino)
void f2fs_update_inode(struct inode *inode, struct page *node_page)
{
struct f2fs_inode *ri;
struct extent_tree *et = F2FS_I(inode)->extent_tree;
struct extent_tree *et = F2FS_I(inode)->extent_tree[EX_READ];
f2fs_wait_on_page_writeback(node_page, NODE, true, true);
set_page_dirty(node_page);
@ -590,7 +593,7 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page)
if (et) {
read_lock(&et->lock);
set_raw_extent(&et->largest, &ri->i_ext);
set_raw_read_extent(&et->largest, &ri->i_ext);
read_unlock(&et->lock);
} else {
memset(&ri->i_ext, 0, sizeof(ri->i_ext));

View File

@ -105,8 +105,6 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
}
F2FS_I(inode)->i_inline_xattr_size = xattr_size;
f2fs_init_extent_tree(inode, NULL);
F2FS_I(inode)->i_flags =
f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED);
@ -133,6 +131,8 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
f2fs_set_inode_flags(inode);
f2fs_init_extent_tree(inode);
trace_f2fs_new_inode(inode, 0);
return inode;

View File

@ -58,7 +58,7 @@ bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type)
avail_ram = val.totalram - val.totalhigh;
/*
* give 25%, 25%, 50%, 50%, 50% memory for each components respectively
* give 25%, 25%, 50%, 50%, 25%, 25% memory for each components respectively
*/
if (type == FREE_NIDS) {
mem_size = (nm_i->nid_cnt[FREE_NID] *
@ -83,12 +83,16 @@ bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type)
sizeof(struct ino_entry);
mem_size >>= PAGE_SHIFT;
res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 1);
} else if (type == EXTENT_CACHE) {
mem_size = (atomic_read(&sbi->total_ext_tree) *
} else if (type == READ_EXTENT_CACHE || type == AGE_EXTENT_CACHE) {
enum extent_type etype = type == READ_EXTENT_CACHE ?
EX_READ : EX_BLOCK_AGE;
struct extent_tree_info *eti = &sbi->extent_tree[etype];
mem_size = (atomic_read(&eti->total_ext_tree) *
sizeof(struct extent_tree) +
atomic_read(&sbi->total_ext_node) *
atomic_read(&eti->total_ext_node) *
sizeof(struct extent_node)) >> PAGE_SHIFT;
res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 1);
res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 2);
} else if (type == INMEM_PAGES) {
/* it allows 20% / total_ram for inmemory pages */
mem_size = get_pages(sbi, F2FS_INMEM_PAGES);
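Each extent cache type is now capped at a quarter (>> 2) of the ram_thresh budget instead of half (>> 1), so the read and block-age caches together still fit in the share the single read cache used to get. A standalone sketch of the check, with the field names simplified and ram_thresh taken as a percentage:

#include <stdbool.h>

/* True while the cache footprint (in pages) stays under a quarter of
 * (available RAM pages * ram_thresh / 100), as in f2fs_available_free_memory(). */
static bool extent_cache_within_budget(unsigned long cache_pages,
				       unsigned long avail_ram_pages,
				       unsigned int ram_thresh)
{
	return cache_pages < ((avail_ram_pages * ram_thresh / 100) >> 2);
}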
@ -846,6 +850,26 @@ int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
dn->ofs_in_node = offset[level];
dn->node_page = npage[level];
dn->data_blkaddr = f2fs_data_blkaddr(dn);
if (is_inode_flag_set(dn->inode, FI_COMPRESSED_FILE) &&
f2fs_sb_has_readonly(sbi)) {
unsigned int c_len = f2fs_cluster_blocks_are_contiguous(dn);
block_t blkaddr;
if (!c_len)
goto out;
blkaddr = f2fs_data_blkaddr(dn);
if (blkaddr == COMPRESS_ADDR)
blkaddr = data_blkaddr(dn->inode, dn->node_page,
dn->ofs_in_node + 1);
f2fs_update_read_extent_tree_range_compressed(dn->inode,
index, blkaddr,
F2FS_I(dn->inode)->i_cluster_size,
c_len);
}
out:
return 0;
release_pages:
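For compressed clusters on read-only images the cached read extent keeps the logical cluster length in len but only c_len physically contiguous blocks, which is why f2fs_read_multi_pages() further up stops handing out block addresses once i reaches ei.c_len. A rough standalone model of that layout (illustrative only, not the kernel structures):

struct cluster_ext { unsigned int blk, llen, c_len; };	/* llen logical blocks, c_len physical */

/* Physical block backing logical index i, or 0 when i is past the compressed
 * payload and the data has to come from decompression instead. */
static unsigned int cluster_blkaddr(const struct cluster_ext *e, unsigned int i)
{
	return i < e->c_len ? e->blk + i : 0;
}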

View File

@ -148,7 +148,8 @@ enum mem_type {
NAT_ENTRIES, /* indicates the cached nat entry */
DIRTY_DENTS, /* indicates dirty dentry pages */
INO_ENTRIES, /* indicates inode entries */
EXTENT_CACHE, /* indicates extent cache */
READ_EXTENT_CACHE, /* indicates read extent cache */
AGE_EXTENT_CACHE, /* indicates age extent cache */
INMEM_PAGES, /* indicates inmemory pages */
DISCARD_CACHE, /* indicates memory of cached discard cmds */
COMPRESS_PAGE, /* indicates memory of cached compressed pages */

View File

@ -536,8 +536,14 @@ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi, bool from_bg)
return;
/* try to shrink extent cache when there is no enough memory */
if (!f2fs_available_free_memory(sbi, EXTENT_CACHE))
f2fs_shrink_extent_tree(sbi, EXTENT_CACHE_SHRINK_NUMBER);
if (!f2fs_available_free_memory(sbi, READ_EXTENT_CACHE))
f2fs_shrink_read_extent_tree(sbi,
READ_EXTENT_CACHE_SHRINK_NUMBER);
/* try to shrink age extent cache when there is not enough memory */
if (!f2fs_available_free_memory(sbi, AGE_EXTENT_CACHE))
f2fs_shrink_age_extent_tree(sbi,
AGE_EXTENT_CACHE_SHRINK_NUMBER);
/* check the # of cached NAT entries */
if (!f2fs_available_free_memory(sbi, NAT_ENTRIES))
@ -3292,10 +3298,28 @@ static int __get_segment_type_4(struct f2fs_io_info *fio)
}
}
static int __get_age_segment_type(struct inode *inode, pgoff_t pgofs)
{
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct extent_info ei = {};
if (f2fs_lookup_age_extent_cache(inode, pgofs, &ei)) {
if (!ei.age)
return NO_CHECK_TYPE;
if (ei.age <= sbi->hot_data_age_threshold)
return CURSEG_HOT_DATA;
if (ei.age <= sbi->warm_data_age_threshold)
return CURSEG_WARM_DATA;
return CURSEG_COLD_DATA;
}
return NO_CHECK_TYPE;
}
static int __get_segment_type_6(struct f2fs_io_info *fio)
{
if (fio->type == DATA) {
struct inode *inode = fio->page->mapping->host;
int type;
if (is_inode_flag_set(inode, FI_ALIGNED_WRITE))
return CURSEG_COLD_DATA_PINNED;
@ -3310,6 +3334,11 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
}
if (file_is_cold(inode) || f2fs_need_compress_data(inode))
return CURSEG_COLD_DATA;
type = __get_age_segment_type(inode, fio->page->index);
if (type != NO_CHECK_TYPE)
return type;
if (file_is_hot(inode) ||
is_inode_flag_set(inode, FI_HOT_DATA) ||
f2fs_is_atomic_file(inode) ||
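For regular data writes the age hint is consulted after the cold-file and compression checks but before the hot-file heuristics, so a block's rewrite history can steer it into the hot or warm log even when no per-file temperature flag is set. The mapping itself, mirrored as a standalone helper (the thresholds are the sysfs-tunable hot_data_age_threshold / warm_data_age_threshold values):

enum data_temp { TEMP_HOT, TEMP_WARM, TEMP_COLD, TEMP_NO_HINT };

/* Mirrors __get_age_segment_type(): a small age means the range was
 * rewritten recently, so treat it as hot. */
static enum data_temp age_to_temp(unsigned long long age,
				  unsigned long long hot_thresh,
				  unsigned long long warm_thresh)
{
	if (!age)
		return TEMP_NO_HINT;
	if (age <= hot_thresh)
		return TEMP_HOT;
	if (age <= warm_thresh)
		return TEMP_WARM;
	return TEMP_COLD;
}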
@ -3421,6 +3450,9 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
locate_dirty_segment(sbi, GET_SEGNO(sbi, old_blkaddr));
locate_dirty_segment(sbi, GET_SEGNO(sbi, *new_blkaddr));
if (IS_DATASEG(type))
atomic64_inc(&sbi->allocated_data_blocks);
up_write(&sit_i->sentry_lock);
if (page && IS_NODESEG(type)) {
@ -3542,6 +3574,8 @@ void f2fs_outplace_write_data(struct dnode_of_data *dn,
struct f2fs_summary sum;
f2fs_bug_on(sbi, dn->data_blkaddr == NULL_ADDR);
if (fio->io_type == FS_DATA_IO || fio->io_type == FS_CP_DATA_IO)
f2fs_update_age_extent_cache(dn);
set_summary(&sum, dn->nid, dn->ofs_in_node, fio->version);
do_write_page(&sum, fio);
f2fs_update_data_blkaddr(dn, fio->new_blkaddr);

View File

@ -28,10 +28,13 @@ static unsigned long __count_free_nids(struct f2fs_sb_info *sbi)
return count > 0 ? count : 0;
}
static unsigned long __count_extent_cache(struct f2fs_sb_info *sbi)
static unsigned long __count_extent_cache(struct f2fs_sb_info *sbi,
enum extent_type type)
{
return atomic_read(&sbi->total_zombie_tree) +
atomic_read(&sbi->total_ext_node);
struct extent_tree_info *eti = &sbi->extent_tree[type];
return atomic_read(&eti->total_zombie_tree) +
atomic_read(&eti->total_ext_node);
}
unsigned long f2fs_shrink_count(struct shrinker *shrink,
@ -53,8 +56,11 @@ unsigned long f2fs_shrink_count(struct shrinker *shrink,
}
spin_unlock(&f2fs_list_lock);
/* count extent cache entries */
count += __count_extent_cache(sbi);
/* count read extent cache entries */
count += __count_extent_cache(sbi, EX_READ);
/* count block age extent cache entries */
count += __count_extent_cache(sbi, EX_BLOCK_AGE);
/* count clean nat cache entries */
count += __count_nat_entries(sbi);
@ -100,7 +106,10 @@ unsigned long f2fs_shrink_scan(struct shrinker *shrink,
sbi->shrinker_run_no = run_no;
/* shrink extent cache entries */
freed += f2fs_shrink_extent_tree(sbi, nr >> 1);
freed += f2fs_shrink_age_extent_tree(sbi, nr >> 2);
/* shrink read extent cache entries */
freed += f2fs_shrink_read_extent_tree(sbi, nr >> 2);
/* shrink clean nat cache entries */
if (freed < nr)
@ -130,7 +139,9 @@ void f2fs_join_shrinker(struct f2fs_sb_info *sbi)
void f2fs_leave_shrinker(struct f2fs_sb_info *sbi)
{
f2fs_shrink_extent_tree(sbi, __count_extent_cache(sbi));
f2fs_shrink_read_extent_tree(sbi, __count_extent_cache(sbi, EX_READ));
f2fs_shrink_age_extent_tree(sbi,
__count_extent_cache(sbi, EX_BLOCK_AGE));
spin_lock(&f2fs_list_lock);
list_del_init(&sbi->s_list);

View File

@ -154,6 +154,7 @@ enum {
Opt_atgc,
Opt_gc_merge,
Opt_nogc_merge,
Opt_age_extent_cache,
Opt_err,
};
@ -229,6 +230,7 @@ static match_table_t f2fs_tokens = {
{Opt_atgc, "atgc"},
{Opt_gc_merge, "gc_merge"},
{Opt_nogc_merge, "nogc_merge"},
{Opt_age_extent_cache, "age_extent_cache"},
{Opt_err, NULL},
};
@ -753,10 +755,10 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
set_opt(sbi, FASTBOOT);
break;
case Opt_extent_cache:
set_opt(sbi, EXTENT_CACHE);
set_opt(sbi, READ_EXTENT_CACHE);
break;
case Opt_noextent_cache:
clear_opt(sbi, EXTENT_CACHE);
clear_opt(sbi, READ_EXTENT_CACHE);
break;
case Opt_noinline_data:
clear_opt(sbi, INLINE_DATA);
@ -1148,6 +1150,9 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
case Opt_nogc_merge:
clear_opt(sbi, GC_MERGE);
break;
case Opt_age_extent_cache:
set_opt(sbi, AGE_EXTENT_CACHE);
break;
default:
f2fs_err(sbi, "Unrecognized mount option \"%s\" or missing value",
p);
@ -1817,10 +1822,12 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
seq_puts(seq, ",nobarrier");
if (test_opt(sbi, FASTBOOT))
seq_puts(seq, ",fastboot");
if (test_opt(sbi, EXTENT_CACHE))
if (test_opt(sbi, READ_EXTENT_CACHE))
seq_puts(seq, ",extent_cache");
else
seq_puts(seq, ",noextent_cache");
if (test_opt(sbi, AGE_EXTENT_CACHE))
seq_puts(seq, ",age_extent_cache");
if (test_opt(sbi, DATA_FLUSH))
seq_puts(seq, ",data_flush");
@ -1922,7 +1929,7 @@ static void default_options(struct f2fs_sb_info *sbi)
set_opt(sbi, INLINE_XATTR);
set_opt(sbi, INLINE_DATA);
set_opt(sbi, INLINE_DENTRY);
set_opt(sbi, EXTENT_CACHE);
set_opt(sbi, READ_EXTENT_CACHE);
set_opt(sbi, NOHEAP);
clear_opt(sbi, DISABLE_CHECKPOINT);
set_opt(sbi, MERGE_CHECKPOINT);
@ -2042,7 +2049,8 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
bool need_restart_gc = false, need_stop_gc = false;
bool need_restart_ckpt = false, need_stop_ckpt = false;
bool need_restart_flush = false, need_stop_flush = false;
bool no_extent_cache = !test_opt(sbi, EXTENT_CACHE);
bool no_read_extent_cache = !test_opt(sbi, READ_EXTENT_CACHE);
bool no_age_extent_cache = !test_opt(sbi, AGE_EXTENT_CACHE);
bool disable_checkpoint = test_opt(sbi, DISABLE_CHECKPOINT);
bool no_io_align = !F2FS_IO_ALIGNED(sbi);
bool no_atgc = !test_opt(sbi, ATGC);
@ -2132,11 +2140,17 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
}
/* disallow enable/disable extent_cache dynamically */
if (no_extent_cache == !!test_opt(sbi, EXTENT_CACHE)) {
if (no_read_extent_cache == !!test_opt(sbi, READ_EXTENT_CACHE)) {
err = -EINVAL;
f2fs_warn(sbi, "switch extent_cache option is not allowed");
goto restore_opts;
}
/* disallow enable/disable age extent_cache dynamically */
if (no_age_extent_cache == !!test_opt(sbi, AGE_EXTENT_CACHE)) {
err = -EINVAL;
f2fs_warn(sbi, "switch age_extent_cache option is not allowed");
goto restore_opts;
}
if (no_io_align == !!F2FS_IO_ALIGNED(sbi)) {
err = -EINVAL;

View File

@ -549,6 +549,24 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
return count;
}
if (!strcmp(a->attr.name, "hot_data_age_threshold")) {
if (t == 0 || t >= sbi->warm_data_age_threshold)
return -EINVAL;
if (t == *ui)
return count;
*ui = (unsigned int)t;
return count;
}
if (!strcmp(a->attr.name, "warm_data_age_threshold")) {
if (t == 0 || t <= sbi->hot_data_age_threshold)
return -EINVAL;
if (t == *ui)
return count;
*ui = (unsigned int)t;
return count;
}
*ui = (unsigned int)t;
return count;
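Both handlers enforce 0 < hot_data_age_threshold < warm_data_age_threshold. With the defaults (262144 and 2621440), writing 3000000 to hot_data_age_threshold or 50000 to warm_data_age_threshold is rejected with -EINVAL, while e.g. 524288 for the hot threshold, still below the warm one, is accepted.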
@ -778,6 +796,10 @@ F2FS_RW_ATTR(ATGC_INFO, atgc_management, atgc_age_threshold, age_threshold);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_segment_mode, gc_segment_mode);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_reclaimed_segments, gc_reclaimed_segs);
/* For block age extent cache */
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, hot_data_age_threshold, hot_data_age_threshold);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, warm_data_age_threshold, warm_data_age_threshold);
#define ATTR_LIST(name) (&f2fs_attr_##name.attr)
static struct attribute *f2fs_attrs[] = {
ATTR_LIST(gc_urgent_sleep_time),
@ -853,6 +875,8 @@ static struct attribute *f2fs_attrs[] = {
ATTR_LIST(atgc_age_threshold),
ATTR_LIST(gc_segment_mode),
ATTR_LIST(gc_reclaimed_segments),
ATTR_LIST(hot_data_age_threshold),
ATTR_LIST(warm_data_age_threshold),
NULL,
};
ATTRIBUTE_GROUPS(f2fs);

View File

@ -208,10 +208,13 @@ static unsigned int fuse_req_hash(u64 unique)
/**
* A new request is available, wake fiq->waitq
*/
static void fuse_dev_wake_and_unlock(struct fuse_iqueue *fiq)
static void fuse_dev_wake_and_unlock(struct fuse_iqueue *fiq, bool sync)
__releases(fiq->lock)
{
wake_up(&fiq->waitq);
if (sync)
wake_up_sync(&fiq->waitq);
else
wake_up(&fiq->waitq);
kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
spin_unlock(&fiq->lock);
}
@ -224,14 +227,14 @@ const struct fuse_iqueue_ops fuse_dev_fiq_ops = {
EXPORT_SYMBOL_GPL(fuse_dev_fiq_ops);
static void queue_request_and_unlock(struct fuse_iqueue *fiq,
struct fuse_req *req)
struct fuse_req *req, bool sync)
__releases(fiq->lock)
{
req->in.h.len = sizeof(struct fuse_in_header) +
fuse_len_args(req->args->in_numargs,
(struct fuse_arg *) req->args->in_args);
list_add_tail(&req->list, &fiq->pending);
fiq->ops->wake_pending_and_unlock(fiq);
fiq->ops->wake_pending_and_unlock(fiq, sync);
}
void fuse_queue_forget(struct fuse_conn *fc, struct fuse_forget_link *forget,
@ -246,7 +249,7 @@ void fuse_queue_forget(struct fuse_conn *fc, struct fuse_forget_link *forget,
if (fiq->connected) {
fiq->forget_list_tail->next = forget;
fiq->forget_list_tail = forget;
fiq->ops->wake_forget_and_unlock(fiq);
fiq->ops->wake_forget_and_unlock(fiq, false);
} else {
kfree(forget);
spin_unlock(&fiq->lock);
@ -266,7 +269,7 @@ static void flush_bg_queue(struct fuse_conn *fc)
fc->active_background++;
spin_lock(&fiq->lock);
req->in.h.unique = fuse_get_unique(fiq);
queue_request_and_unlock(fiq, req);
queue_request_and_unlock(fiq, req, false);
}
}
@ -359,7 +362,7 @@ static int queue_interrupt(struct fuse_req *req)
spin_unlock(&fiq->lock);
return 0;
}
fiq->ops->wake_interrupt_and_unlock(fiq);
fiq->ops->wake_interrupt_and_unlock(fiq, false);
} else {
spin_unlock(&fiq->lock);
}
@ -426,7 +429,7 @@ static void __fuse_request_send(struct fuse_req *req)
/* acquire extra reference, since request is still needed
after fuse_request_end() */
__fuse_get_request(req);
queue_request_and_unlock(fiq, req);
queue_request_and_unlock(fiq, req, true);
request_wait_answer(req);
/* Pairs with smp_wmb() in fuse_request_end() */
@ -601,7 +604,7 @@ static int fuse_simple_notify_reply(struct fuse_mount *fm,
spin_lock(&fiq->lock);
if (fiq->connected) {
queue_request_and_unlock(fiq, req);
queue_request_and_unlock(fiq, req, false);
} else {
err = -ENODEV;
spin_unlock(&fiq->lock);
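The new sync flag only changes how the fuse daemon is woken. __fuse_request_send() is about to block in request_wait_answer(), so it queues with sync=true and fuse_dev_wake_and_unlock() issues wake_up_sync() (the __wake_up_sync() wrapper added to wait.h below), telling the scheduler this is a synchronous wakeup from a task that will sleep shortly; background requests, forgets, interrupts and notify replies keep sync=false and the plain wake_up(). The virtiofs queue ops only grow the extra parameter. The resulting call paths, roughly:

__fuse_request_send()				/* caller blocks in request_wait_answer() */
	queue_request_and_unlock(fiq, req, true)
		fiq->ops->wake_pending_and_unlock(fiq, true)
			fuse_dev_wake_and_unlock(): wake_up_sync(&fiq->waitq)

flush_bg_queue() / fuse_queue_forget() / queue_interrupt() / fuse_simple_notify_reply()
	...(..., false)
		fuse_dev_wake_and_unlock(): wake_up(&fiq->waitq)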

View File

@ -412,19 +412,19 @@ struct fuse_iqueue_ops {
/**
* Signal that a forget has been queued
*/
void (*wake_forget_and_unlock)(struct fuse_iqueue *fiq)
void (*wake_forget_and_unlock)(struct fuse_iqueue *fiq, bool sync)
__releases(fiq->lock);
/**
* Signal that an INTERRUPT request has been queued
*/
void (*wake_interrupt_and_unlock)(struct fuse_iqueue *fiq)
void (*wake_interrupt_and_unlock)(struct fuse_iqueue *fiq, bool sync)
__releases(fiq->lock);
/**
* Signal that a request has been queued
*/
void (*wake_pending_and_unlock)(struct fuse_iqueue *fiq)
void (*wake_pending_and_unlock)(struct fuse_iqueue *fiq, bool sync)
__releases(fiq->lock);
/**

View File

@ -971,7 +971,7 @@ static struct virtio_driver virtio_fs_driver = {
#endif
};
static void virtio_fs_wake_forget_and_unlock(struct fuse_iqueue *fiq)
static void virtio_fs_wake_forget_and_unlock(struct fuse_iqueue *fiq, bool sync)
__releases(fiq->lock)
{
struct fuse_forget_link *link;
@ -1006,7 +1006,8 @@ __releases(fiq->lock)
kfree(link);
}
static void virtio_fs_wake_interrupt_and_unlock(struct fuse_iqueue *fiq)
static void virtio_fs_wake_interrupt_and_unlock(struct fuse_iqueue *fiq,
bool sync)
__releases(fiq->lock)
{
/*
@ -1221,7 +1222,8 @@ static int virtio_fs_enqueue_req(struct virtio_fs_vq *fsvq,
return ret;
}
static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq,
bool sync)
__releases(fiq->lock)
{
unsigned int queue_id = VQ_REQUEST; /* TODO multiqueue */

View File

@ -935,7 +935,7 @@ static const struct io_op_def io_op_defs[] = {
.needs_file = 1,
.hash_reg_file = 1,
.unbound_nonreg_file = 1,
.work_flags = IO_WQ_WORK_BLKCG,
.work_flags = IO_WQ_WORK_BLKCG | IO_WQ_WORK_FILES,
},
[IORING_OP_PROVIDE_BUFFERS] = {},
[IORING_OP_REMOVE_BUFFERS] = {},
@ -9029,7 +9029,7 @@ static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
if (unlikely(ctx->sqo_dead)) {
ret = -EOWNERDEAD;
goto out;
break;
}
if (!io_sqring_full(ctx))
@ -9039,7 +9039,6 @@ static int io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
} while (!signal_pending(current));
finish_wait(&ctx->sqo_sq_wait, &wait);
out:
return ret;
}

View File

@ -60,6 +60,13 @@
#define LED_FUNCTION_MICMUTE "micmute"
#define LED_FUNCTION_MUTE "mute"
/* Used for player LEDs as found on game controllers from e.g. Nintendo, Sony. */
#define LED_FUNCTION_PLAYER1 "player-1"
#define LED_FUNCTION_PLAYER2 "player-2"
#define LED_FUNCTION_PLAYER3 "player-3"
#define LED_FUNCTION_PLAYER4 "player-4"
#define LED_FUNCTION_PLAYER5 "player-5"
/* Miscellaneous functions. Use functions above if you can. */
#define LED_FUNCTION_ACTIVITY "activity"
#define LED_FUNCTION_ALARM "alarm"

View File

@ -219,6 +219,7 @@ void __wake_up_pollfree(struct wait_queue_head *wq_head);
#define wake_up_interruptible_nr(x, nr) __wake_up(x, TASK_INTERRUPTIBLE, nr, NULL)
#define wake_up_interruptible_all(x) __wake_up(x, TASK_INTERRUPTIBLE, 0, NULL)
#define wake_up_interruptible_sync(x) __wake_up_sync((x), TASK_INTERRUPTIBLE)
#define wake_up_sync(x) __wake_up_sync((x), TASK_NORMAL)
/*
* Wakeup macros to be used to report events to the targets.

View File

@ -52,6 +52,8 @@ TRACE_DEFINE_ENUM(CP_DISCARD);
TRACE_DEFINE_ENUM(CP_TRIMMED);
TRACE_DEFINE_ENUM(CP_PAUSE);
TRACE_DEFINE_ENUM(CP_RESIZE);
TRACE_DEFINE_ENUM(EX_READ);
TRACE_DEFINE_ENUM(EX_BLOCK_AGE);
#define show_block_type(type) \
__print_symbolic(type, \
@ -162,6 +164,11 @@ TRACE_DEFINE_ENUM(CP_RESIZE);
{ COMPRESS_ZSTD, "ZSTD" }, \
{ COMPRESS_LZORLE, "LZO-RLE" })
#define show_extent_type(type) \
__print_symbolic(type, \
{ EX_READ, "Read" }, \
{ EX_BLOCK_AGE, "Block Age" })
struct f2fs_sb_info;
struct f2fs_io_info;
struct extent_info;
@ -1526,28 +1533,31 @@ TRACE_EVENT(f2fs_issue_flush,
TRACE_EVENT(f2fs_lookup_extent_tree_start,
TP_PROTO(struct inode *inode, unsigned int pgofs),
TP_PROTO(struct inode *inode, unsigned int pgofs, enum extent_type type),
TP_ARGS(inode, pgofs),
TP_ARGS(inode, pgofs, type),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(ino_t, ino)
__field(unsigned int, pgofs)
__field(enum extent_type, type)
),
TP_fast_assign(
__entry->dev = inode->i_sb->s_dev;
__entry->ino = inode->i_ino;
__entry->pgofs = pgofs;
__entry->type = type;
),
TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u",
TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, type = %s",
show_dev_ino(__entry),
__entry->pgofs)
__entry->pgofs,
show_extent_type(__entry->type))
);
TRACE_EVENT_CONDITION(f2fs_lookup_extent_tree_end,
TRACE_EVENT_CONDITION(f2fs_lookup_read_extent_tree_end,
TP_PROTO(struct inode *inode, unsigned int pgofs,
struct extent_info *ei),
@ -1561,8 +1571,8 @@ TRACE_EVENT_CONDITION(f2fs_lookup_extent_tree_end,
__field(ino_t, ino)
__field(unsigned int, pgofs)
__field(unsigned int, fofs)
__field(u32, blk)
__field(unsigned int, len)
__field(u32, blk)
),
TP_fast_assign(
@ -1570,25 +1580,65 @@ TRACE_EVENT_CONDITION(f2fs_lookup_extent_tree_end,
__entry->ino = inode->i_ino;
__entry->pgofs = pgofs;
__entry->fofs = ei->fofs;
__entry->blk = ei->blk;
__entry->len = ei->len;
__entry->blk = ei->blk;
),
TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, "
"ext_info(fofs: %u, blk: %u, len: %u)",
"read_ext_info(fofs: %u, len: %u, blk: %u)",
show_dev_ino(__entry),
__entry->pgofs,
__entry->fofs,
__entry->blk,
__entry->len)
__entry->len,
__entry->blk)
);
TRACE_EVENT(f2fs_update_extent_tree_range,
TRACE_EVENT_CONDITION(f2fs_lookup_age_extent_tree_end,
TP_PROTO(struct inode *inode, unsigned int pgofs, block_t blkaddr,
unsigned int len),
TP_PROTO(struct inode *inode, unsigned int pgofs,
struct extent_info *ei),
TP_ARGS(inode, pgofs, blkaddr, len),
TP_ARGS(inode, pgofs, ei),
TP_CONDITION(ei),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(ino_t, ino)
__field(unsigned int, pgofs)
__field(unsigned int, fofs)
__field(unsigned int, len)
__field(unsigned long long, age)
__field(unsigned long long, blocks)
),
TP_fast_assign(
__entry->dev = inode->i_sb->s_dev;
__entry->ino = inode->i_ino;
__entry->pgofs = pgofs;
__entry->fofs = ei->fofs;
__entry->len = ei->len;
__entry->age = ei->age;
__entry->blocks = ei->last_blocks;
),
TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, "
"age_ext_info(fofs: %u, len: %u, age: %llu, blocks: %llu)",
show_dev_ino(__entry),
__entry->pgofs,
__entry->fofs,
__entry->len,
__entry->age,
__entry->blocks)
);
TRACE_EVENT(f2fs_update_read_extent_tree_range,
TP_PROTO(struct inode *inode, unsigned int pgofs, unsigned int len,
block_t blkaddr,
unsigned int c_len),
TP_ARGS(inode, pgofs, len, blkaddr, c_len),
TP_STRUCT__entry(
__field(dev_t, dev)
@ -1596,70 +1646,115 @@ TRACE_EVENT(f2fs_update_extent_tree_range,
__field(unsigned int, pgofs)
__field(u32, blk)
__field(unsigned int, len)
__field(unsigned int, c_len)
),
TP_fast_assign(
__entry->dev = inode->i_sb->s_dev;
__entry->ino = inode->i_ino;
__entry->pgofs = pgofs;
__entry->blk = blkaddr;
__entry->len = len;
__entry->blk = blkaddr;
__entry->c_len = c_len;
),
TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, "
"blkaddr = %u, len = %u",
"len = %u, blkaddr = %u, c_len = %u",
show_dev_ino(__entry),
__entry->pgofs,
__entry->len,
__entry->blk,
__entry->len)
__entry->c_len)
);
TRACE_EVENT(f2fs_update_age_extent_tree_range,
TP_PROTO(struct inode *inode, unsigned int pgofs, unsigned int len,
unsigned long long age,
unsigned long long last_blks),
TP_ARGS(inode, pgofs, len, age, last_blks),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(ino_t, ino)
__field(unsigned int, pgofs)
__field(unsigned int, len)
__field(unsigned long long, age)
__field(unsigned long long, blocks)
),
TP_fast_assign(
__entry->dev = inode->i_sb->s_dev;
__entry->ino = inode->i_ino;
__entry->pgofs = pgofs;
__entry->len = len;
__entry->age = age;
__entry->blocks = last_blks;
),
TP_printk("dev = (%d,%d), ino = %lu, pgofs = %u, "
"len = %u, age = %llu, blocks = %llu",
show_dev_ino(__entry),
__entry->pgofs,
__entry->len,
__entry->age,
__entry->blocks)
);
TRACE_EVENT(f2fs_shrink_extent_tree,
TP_PROTO(struct f2fs_sb_info *sbi, unsigned int node_cnt,
unsigned int tree_cnt),
unsigned int tree_cnt, enum extent_type type),
TP_ARGS(sbi, node_cnt, tree_cnt),
TP_ARGS(sbi, node_cnt, tree_cnt, type),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(unsigned int, node_cnt)
__field(unsigned int, tree_cnt)
__field(enum extent_type, type)
),
TP_fast_assign(
__entry->dev = sbi->sb->s_dev;
__entry->node_cnt = node_cnt;
__entry->tree_cnt = tree_cnt;
__entry->type = type;
),
TP_printk("dev = (%d,%d), shrunk: node_cnt = %u, tree_cnt = %u",
TP_printk("dev = (%d,%d), shrunk: node_cnt = %u, tree_cnt = %u, type = %s",
show_dev(__entry->dev),
__entry->node_cnt,
__entry->tree_cnt)
__entry->tree_cnt,
show_extent_type(__entry->type))
);
TRACE_EVENT(f2fs_destroy_extent_tree,
TP_PROTO(struct inode *inode, unsigned int node_cnt),
TP_PROTO(struct inode *inode, unsigned int node_cnt,
enum extent_type type),
TP_ARGS(inode, node_cnt),
TP_ARGS(inode, node_cnt, type),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(ino_t, ino)
__field(unsigned int, node_cnt)
__field(enum extent_type, type)
),
TP_fast_assign(
__entry->dev = inode->i_sb->s_dev;
__entry->ino = inode->i_ino;
__entry->node_cnt = node_cnt;
__entry->type = type;
),
TP_printk("dev = (%d,%d), ino = %lu, destroyed: node_cnt = %u",
TP_printk("dev = (%d,%d), ino = %lu, destroyed: node_cnt = %u, type = %s",
show_dev_ino(__entry),
__entry->node_cnt)
__entry->node_cnt,
show_extent_type(__entry->type))
);
DECLARE_EVENT_CLASS(f2fs_sync_dirty_inodes,

View File

@ -287,6 +287,14 @@ DECLARE_HOOK(android_vh_set_shmem_page_flag,
DECLARE_HOOK(android_vh_remove_vmalloc_stack,
TP_PROTO(struct vm_struct *vm),
TP_ARGS(vm));
DECLARE_HOOK(android_vh_alloc_pages_reclaim_bypass,
TP_PROTO(gfp_t gfp_mask, int order, int alloc_flags,
int migratetype, struct page **page),
TP_ARGS(gfp_mask, order, alloc_flags, migratetype, page));
DECLARE_HOOK(android_vh_alloc_pages_failure_bypass,
TP_PROTO(gfp_t gfp_mask, int order, int alloc_flags,
int migratetype, struct page **page),
TP_ARGS(gfp_mask, order, alloc_flags, migratetype, page));
DECLARE_HOOK(android_vh_test_clear_look_around_ref,
TP_PROTO(struct page *page),
TP_ARGS(page));

View File

@ -391,6 +391,10 @@ DECLARE_HOOK(android_vh_setscheduler_uclamp,
TP_PROTO(struct task_struct *tsk, int clamp_id, unsigned int value),
TP_ARGS(tsk, clamp_id, value));
DECLARE_HOOK(android_vh_mmput,
TP_PROTO(void *unused),
TP_ARGS(unused));
DECLARE_HOOK(android_vh_sched_pelt_multiplier,
TP_PROTO(unsigned int old, unsigned int cur, int *ret),
TP_ARGS(old, cur, ret));

View File

@ -131,6 +131,20 @@ config COMPILE_TEST
here. If you are a user/distributor, say N here to exclude useless
drivers to be distributed.
config WERROR
bool "Compile the kernel with warnings as errors"
default y
help
A kernel build should not cause any compiler warnings, and this
enables the '-Werror' flag to enforce that rule by default.
However, if you have a new (or very old) compiler with odd and
unusual warnings, or you have some architecture with problems,
you may need to disable this config option in order to
successfully build the kernel.
If in doubt, say Y.
config UAPI_HEADER_TEST
bool "Compile test UAPI headers"
depends on HEADERS_INSTALL && CC_CAN_LINK

View File

@ -1150,8 +1150,10 @@ void mmput(struct mm_struct *mm)
{
might_sleep();
if (atomic_dec_and_test(&mm->mm_users))
if (atomic_dec_and_test(&mm->mm_users)) {
trace_android_vh_mmput(NULL);
__mmput(mm);
}
}
EXPORT_SYMBOL_GPL(mmput);

View File

@ -298,6 +298,7 @@ config FRAME_WARN
int "Warn for stack frames larger than"
range 0 8192
default 2048 if GCC_PLUGIN_LATENT_ENTROPY
default 1280 if KASAN && !64BIT
default 1280 if (!64BIT && PARISC)
default 1024 if (!64BIT && !PARISC)
default 2048 if 64BIT

View File

@ -2421,8 +2421,22 @@ struct vm_area_struct *get_vma(struct mm_struct *mm, unsigned long addr)
read_lock(&mm->mm_rb_lock);
vma = __find_vma(mm, addr);
if (vma)
atomic_inc(&vma->vm_ref_count);
/*
* If there is a concurrent fast mremap, bail out since the entire
* PMD/PUD subtree may have been remapped.
*
* This is usually safe for conventional mremap since it takes the
* PTE locks as does SPF. However fast mremap only takes the lock
* at the PMD/PUD level which is ok as it is done with the mmap
* write lock held. But SPF, as the term implies, forgoes taking the
* mmap read lock and also cannot take the PTL at the larger PMD/PUD
* granularity, as that would introduce huge contention in the page
* fault path; so fall back to regular fault handling.
*/
if (vma && !atomic_inc_unless_negative(&vma->vm_ref_count))
vma = NULL;
read_unlock(&mm->mm_rb_lock);
return vma;

View File

@ -210,11 +210,39 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
drop_rmap_locks(vma);
}
#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
static inline bool trylock_vma_ref_count(struct vm_area_struct *vma)
{
/*
* If we have the only reference, swap the refcount to -1. This
* will prevent other concurrent references by get_vma() for SPFs.
*/
return atomic_cmpxchg(&vma->vm_ref_count, 1, -1) == 1;
}
/*
* Speculative page fault handlers will not detect page table changes done
* without ptl locking.
* Restore the VMA reference count to 1 after a fast mremap.
*/
#if defined(CONFIG_HAVE_MOVE_PMD) && !defined(CONFIG_SPECULATIVE_PAGE_FAULT)
static inline void unlock_vma_ref_count(struct vm_area_struct *vma)
{
/*
* This should only be called after a corresponding,
* successful trylock_vma_ref_count().
*/
VM_BUG_ON_VMA(atomic_cmpxchg(&vma->vm_ref_count, -1, 1) != -1,
vma);
}
#else /* !CONFIG_SPECULATIVE_PAGE_FAULT */
static inline bool trylock_vma_ref_count(struct vm_area_struct *vma)
{
return true;
}
static inline void unlock_vma_ref_count(struct vm_area_struct *vma)
{
}
#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
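The VMA reference count now doubles as a lightweight writer lock between fast mremap and speculative page faults: move_normal_pmd()/move_normal_pud() may only proceed if they can flip vm_ref_count from 1 to -1, and get_vma() in the mmap.c hunk above only takes a reference while the count is non-negative, otherwise the fault falls back to the locked path. A userspace model of that handshake (the kernel uses the atomic_t helpers; this is for illustration only):

#include <stdatomic.h>
#include <stdbool.h>

static bool writer_trylock(atomic_int *ref)	/* models trylock_vma_ref_count() */
{
	int expected = 1;
	/* succeeds only for the sole owner; -1 keeps readers out */
	return atomic_compare_exchange_strong(ref, &expected, -1);
}

static bool reader_get(atomic_int *ref)		/* models get_vma()'s inc_unless_negative */
{
	int v = atomic_load(ref);
	while (v >= 0) {
		if (atomic_compare_exchange_weak(ref, &v, v + 1))
			return true;		/* reference taken */
	}
	return false;				/* writer owns the VMA: fall back */
}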
#ifdef CONFIG_HAVE_MOVE_PMD
static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
{
@ -248,6 +276,14 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
if (WARN_ON_ONCE(!pmd_none(*new_pmd)))
return false;
/*
* We hold both exclusive mmap_lock and rmap_lock at this point and
* cannot block. If we cannot immediately take exclusive ownership
* of the VMA, fall back to move_ptes().
*/
if (!trylock_vma_ref_count(vma))
return false;
/*
* We don't have to worry about the ordering of src and dst
* ptlocks because exclusive mmap_lock prevents deadlock.
@ -270,6 +306,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
spin_unlock(new_ptl);
spin_unlock(old_ptl);
unlock_vma_ref_count(vma);
return true;
}
#else
@ -281,11 +318,7 @@ static inline bool move_normal_pmd(struct vm_area_struct *vma,
}
#endif
/*
* Speculative page fault handlers will not detect page table changes done
* without ptl locking.
*/
#if defined(CONFIG_HAVE_MOVE_PUD) && !defined(CONFIG_SPECULATIVE_PAGE_FAULT)
#ifdef CONFIG_HAVE_MOVE_PUD
static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
{
@ -300,6 +333,14 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
if (WARN_ON_ONCE(!pud_none(*new_pud)))
return false;
/*
* We hold both exclusive mmap_lock and rmap_lock at this point and
* cannot block. If we cannot immediately take exclusive ownership
* of the VMA, fall back to move_ptes().
*/
if (!trylock_vma_ref_count(vma))
return false;
/*
* We don't have to worry about the ordering of src and dst
* ptlocks because exclusive mmap_lock prevents deadlock.
@ -322,6 +363,7 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
spin_unlock(new_ptl);
spin_unlock(old_ptl);
unlock_vma_ref_count(vma);
return true;
}
#else

View File

@ -4956,6 +4956,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
if (current->flags & PF_MEMALLOC)
goto nopage;
trace_android_vh_alloc_pages_reclaim_bypass(gfp_mask, order,
alloc_flags, ac->migratetype, &page);
if (page)
goto got_pg;
/* Try direct reclaim and then allocating */
page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
&did_some_progress);
@ -5071,6 +5077,11 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
goto retry;
}
fail:
trace_android_vh_alloc_pages_failure_bypass(gfp_mask, order,
alloc_flags, ac->migratetype, &page);
if (page)
goto got_pg;
warn_alloc(gfp_mask, ac->nodemask,
"page allocation failure: order:%u", order);
got_pg:
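These hooks let a vendor module either satisfy the allocation before direct reclaim or rescue it after the slowpath has given up. A hypothetical probe, assuming the usual register_trace_android_vh_* helpers that DECLARE_HOOK() generates; none of this is part of the commit itself:

/* Hypothetical vendor-side handler: hand back a page from a vendor-managed
 * reserve, or leave *page == NULL to keep the normal path. */
static void vendor_reclaim_bypass(void *data, gfp_t gfp_mask, int order,
				  int alloc_flags, int migratetype,
				  struct page **page)
{
	/* vendor policy goes here */
}

/* module init:
 *	register_trace_android_vh_alloc_pages_reclaim_bypass(vendor_reclaim_bypass, NULL);
 */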

View File

@ -24,6 +24,7 @@
#include <errno.h>
#include <fcntl.h>
#include <getopt.h>
#include <limits.h>
#include <linux/if_alg.h>
#include <stdarg.h>
@ -45,6 +46,8 @@
* ---------------------------------------------------------------------------*/
#define ARRAY_SIZE(A) (sizeof(A) / sizeof((A)[0]))
#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))
static void __attribute__((noreturn))
do_die(const char *format, va_list va, int err)
@ -109,6 +112,23 @@ static const char *bytes_to_hex(const uint8_t *bytes, size_t count)
return hex;
}
static void full_write(int fd, const void *buf, size_t count)
{
while (count) {
ssize_t ret = write(fd, buf, count);
if (ret < 0)
die_errno("write failed");
buf += ret;
count -= ret;
}
}
enum {
OPT_AMOUNT,
OPT_ITERATIONS,
};
static void usage(void);
/* ---------------------------------------------------------------------------
@ -226,6 +246,68 @@ static int get_req_fd(int alg_fd, const char *alg_name)
return req_fd;
}
/* ---------------------------------------------------------------------------
* dump_jitterentropy command
* ---------------------------------------------------------------------------*/
static void dump_from_jent_fd(int fd, size_t count)
{
uint8_t buf[AF_ALG_MAX_RNG_REQUEST_SIZE];
while (count) {
ssize_t ret;
memset(buf, 0, sizeof(buf));
ret = read(fd, buf, MIN(count, sizeof(buf)));
if (ret < 0)
die_errno("error reading from jitterentropy_rng");
full_write(STDOUT_FILENO, buf, ret);
count -= ret;
}
}
static int cmd_dump_jitterentropy(int argc, char *argv[])
{
static const struct option longopts[] = {
{ "amount", required_argument, NULL, OPT_AMOUNT },
{ "iterations", required_argument, NULL, OPT_ITERATIONS },
{ NULL, 0, NULL, 0 },
};
size_t amount = 128;
size_t iterations = 1;
size_t i;
int c;
while ((c = getopt_long(argc, argv, "", longopts, NULL)) != -1) {
switch (c) {
case OPT_AMOUNT:
amount = strtoul(optarg, NULL, 0);
if (amount <= 0 || amount >= ULONG_MAX)
die("invalid argument to --amount");
break;
case OPT_ITERATIONS:
iterations = strtoul(optarg, NULL, 0);
if (iterations <= 0 || iterations >= ULONG_MAX)
die("invalid argument to --iterations");
break;
default:
usage();
return 1;
}
}
for (i = 0; i < iterations; i++) {
int alg_fd = get_alg_fd("rng", "jitterentropy_rng");
int req_fd = get_req_fd(alg_fd, "jitterentropy_rng");
dump_from_jent_fd(req_fd, amount);
close(req_fd);
close(alg_fd);
}
return 0;
}
/* ---------------------------------------------------------------------------
* show_invalid_inputs command
* ---------------------------------------------------------------------------*/
@ -510,6 +592,7 @@ static const struct command {
const char *name;
int (*func)(int argc, char *argv[]);
} commands[] = {
{ "dump_jitterentropy", cmd_dump_jitterentropy },
{ "show_invalid_inputs", cmd_show_invalid_inputs },
{ "show_module_version", cmd_show_module_version },
{ "show_service_indicators", cmd_show_service_indicators },
@ -519,9 +602,14 @@ static void usage(void)
{
fprintf(stderr,
"Usage:\n"
" fips140_lab_util dump_jitterentropy [OPTION]...\n"
" fips140_lab_util show_invalid_inputs\n"
" fips140_lab_util show_module_version\n"
" fips140_lab_util show_service_indicators [SERVICE]...\n"
"\n"
"Options for dump_jitterentropy:\n"
" --amount=AMOUNT Amount to dump in bytes per iteration (default 128)\n"
" --iterations=COUNT Number of start-up iterations (default 1)\n"
);
}
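For example, "fips140_lab_util dump_jitterentropy --amount=1024 --iterations=4 > jent.bin" opens a fresh jitterentropy_rng instance four times and writes 1024 bytes from each to stdout, e.g. to collect independent start-up samples for entropy assessment.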