Merge "Merge keystone/android12-5.10-keystone-qcom-release.43+ (05a2a29) into msm-5.10"

qctecmdr 2021-06-17 16:40:19 -07:00 committed by Gerrit - the friendly Code Review server
commit 18fa64cad1
621 changed files with 23878 additions and 17984 deletions


@ -438,3 +438,31 @@ Description: Show the count of inode newly enabled for compression since mount.
Note that when the compression is disabled for the files, this count
doesn't decrease. If you write "0" here, you can initialize
compr_new_inode to "0".
What: /sys/fs/f2fs/<disk>/atgc_candidate_ratio
Date: May 2021
Contact: "Chao Yu" <yuchao0@huawei.com>
Description: When ATGC is on, it controls the candidate ratio in order to limit the
total number of potential victims among all candidates. The value should
be in the range [0, 100]; by default it is initialized to 20 (%).
What: /sys/fs/f2fs/<disk>/atgc_candidate_count
Date: May 2021
Contact: "Chao Yu" <yuchao0@huawei.com>
Description: When ATGC is on, it controls the candidate count in order to limit the
total number of potential victims among all candidates; by default it is
initialized to 10 (sections).
What: /sys/fs/f2fs/<disk>/atgc_age_weight
Date: May 2021
Contact: "Chao Yu" <yuchao0@huawei.com>
Description: When ATGC is on, it controls the age weight used to balance the weight
proportion between age and valid blocks. The value should be in the
range [0, 100]; by default it is initialized to 60 (%).
What: /sys/fs/f2fs/<disk>/atgc_age_threshold
Date: May 2021
Contact: "Chao Yu" <yuchao0@huawei.com>
Description: When ATGC is on, it controls the age threshold used to bypass GCing young
candidates whose age does not exceed the threshold; by default it is
initialized to 604800 seconds (equal to 7 days).


@ -89,13 +89,35 @@ you can use ``+=`` operator. For example::
In this case, the key ``foo`` has ``bar``, ``baz`` and ``qux``.
However, a sub-key and a value can not co-exist under a parent key.
For example, following config is NOT allowed.::
Moreover, sub-keys and a value can coexist under a parent key.
For example, following config is allowed.::
foo = value1
foo.bar = value2 # !ERROR! subkey "bar" and value "value1" can NOT co-exist
foo.bar := value2 # !ERROR! even with the override operator, this is NOT allowed.
foo.bar = value2
foo := value3 # This will update foo's value.
Note, since there is no syntax to put a raw value directly under a
structured key, you have to define it outside of the brace. For example::
foo {
bar = value1
bar {
baz = value2
qux = value3
}
}
Also, the order of the value node under a key is fixed. If there
are a value and subkeys, the value is always the first child node
of the key. Thus if user specifies subkeys first, e.g.::
foo.bar = value1
foo = value2
In the program (and /proc/bootconfig), it will be shown as below::
foo = value2
foo.bar = value1
Comments
--------


@ -4687,10 +4687,6 @@
(that will set all pages holding image data
during restoration read-only).
reap_mem_when_killed_by=
The name of a process; a kill signal sent from it causes the
target process's memory to be reaped by the OOM reaper.
retain_initrd [RAM] Keep initrd memory after extraction
rfkill.default_state=


@ -77,7 +77,8 @@ events, except page fault notifications, may be generated:
- ``UFFD_FEATURE_MINOR_HUGETLBFS`` indicates that the kernel supports
``UFFDIO_REGISTER_MODE_MINOR`` registration for hugetlbfs virtual memory
areas.
areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
support for shmem virtual memory areas.
The userland application should set the feature flags it intends to use
when invoking the ``UFFDIO_API`` ioctl, to request that those features be


@ -20,3 +20,74 @@ Colorimetry Control IDs
The Colorimetry class descriptor. Calling
:ref:`VIDIOC_QUERYCTRL` for this control will
return a description of this control class.
``V4L2_CID_COLORIMETRY_HDR10_CLL_INFO (struct)``
The Content Light Level defines upper bounds for the nominal target
brightness light level of the pictures.
.. c:type:: v4l2_ctrl_hdr10_cll_info
.. cssclass:: longtable
.. flat-table:: struct v4l2_ctrl_hdr10_cll_info
:header-rows: 0
:stub-columns: 0
:widths: 1 1 2
* - __u16
- ``max_content_light_level``
- The upper bound for the maximum light level among all individual
samples for the pictures of a video sequence, cd/m\ :sup:`2`.
When equal to 0 no such upper bound is present.
* - __u16
- ``max_pic_average_light_level``
- The upper bound for the maximum average light level among the
samples for any individual picture of a video sequence,
cd/m\ :sup:`2`. When equal to 0 no such upper bound is present.
``V4L2_CID_COLORIMETRY_HDR10_MASTERING_DISPLAY (struct)``
The mastering display defines the color volume (the color primaries,
white point and luminance range) of a display considered to be the
mastering display for the current video content.
.. c:type:: v4l2_ctrl_hdr10_mastering_display
.. cssclass:: longtable
.. flat-table:: struct v4l2_ctrl_hdr10_mastering_display
:header-rows: 0
:stub-columns: 0
:widths: 1 1 2
* - __u16
- ``display_primaries_x[3]``
- Specifies the normalized x chromaticity coordinate of the color
primary component c of the mastering display in increments of 0.00002.
For describing the mastering display that uses Red, Green and Blue
color primaries, index value c equal to 0 corresponds to the Green
primary, c equal to 1 corresponds to Blue primary and c equal to 2
corresponds to the Red color primary.
* - __u16
- ``display_primaries_y[3]``
- Specifies the normalized y chromaticity coordinate of the color
primary component c of the mastering display in increments of 0.00002.
For describing the mastering display that uses Red, Green and Blue
color primaries, index value c equal to 0 corresponds to the Green
primary, c equal to 1 corresponds to Blue primary and c equal to 2
corresponds to Red color primary.
* - __u16
- ``white_point_x``
- Specifies the normalized x chromaticity coordinate of the white
point of the mastering display in increments of 0.00002.
* - __u16
- ``white_point_y``
- Specifies the normalized y chromaticity coordinate of the white
point of the mastering display in increments of 0.00002.
* - __u32
- ``max_luminance``
- Specifies the nominal maximum display luminance of the mastering
display in units of 0.0001 cd/m\ :sup:`2`.
* - __u32
- ``min_luminance``
- Specifies the nominal minimum display luminance of the mastering
display in units of 0.0001 cd/m\ :sup:`2`.


@ -184,6 +184,14 @@ still cause this situation.
- ``p_area``
- A pointer to a struct :c:type:`v4l2_area`. Valid if this control is
of type ``V4L2_CTRL_TYPE_AREA``.
* - struct :c:type:`v4l2_ctrl_hdr10_cll_info` *
- ``p_hdr10_cll``
- A pointer to a struct :c:type:`v4l2_ctrl_hdr10_cll_info`. Valid if this control is
of type ``V4L2_CTRL_TYPE_HDR10_CLL_INFO``.
* - struct :c:type:`v4l2_ctrl_hdr10_mastering_display` *
- ``p_hdr10_mastering``
- A pointer to a struct :c:type:`v4l2_ctrl_hdr10_mastering_display`. Valid if this control is
of type ``V4L2_CTRL_TYPE_HDR10_MASTERING_DISPLAY``.
* - void *
- ``ptr``
- A pointer to a compound type which can be an N-dimensional array


@ -145,6 +145,8 @@ replace symbol V4L2_CTRL_TYPE_HEVC_SPS :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_HEVC_PPS :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_HEVC_SLICE_PARAMS :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_AREA :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_HDR10_CLL_INFO :c:type:`v4l2_ctrl_type`
replace symbol V4L2_CTRL_TYPE_HDR10_MASTERING_DISPLAY :c:type:`v4l2_ctrl_type`
# V4L2 capability defines
replace define V4L2_CAP_VIDEO_CAPTURE device-capabilities


@ -250,14 +250,14 @@ Users can read via ``ioctl(SECCOMP_IOCTL_NOTIF_RECV)`` (or ``poll()``) on a
seccomp notification fd to receive a ``struct seccomp_notif``, which contains
five members: the input length of the structure, a unique-per-filter ``id``,
the ``pid`` of the task which triggered this request (which may be 0 if the
task is in a pid ns not visible from the listener's pid namespace), a ``flags``
member which for now only has ``SECCOMP_NOTIF_FLAG_SIGNALED``, representing
whether or not the notification is a result of a non-fatal signal, and the
``data`` passed to seccomp. Userspace can then make a decision based on this
information about what to do, and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a
response, indicating what should be returned to userspace. The ``id`` member of
``struct seccomp_notif_resp`` should be the same ``id`` as in ``struct
seccomp_notif``.
task is in a pid ns not visible from the listener's pid namespace). The
notification also contains the ``data`` passed to seccomp, and a ``flags`` member.
The structure should be zeroed out prior to calling the ioctl.
Userspace can then make a decision based on this information about what to do,
and ``ioctl(SECCOMP_IOCTL_NOTIF_SEND)`` a response, indicating what should be
returned to userspace. The ``id`` member of ``struct seccomp_notif_resp`` should
be the same ``id`` as in ``struct seccomp_notif``.
It is worth noting that ``struct seccomp_data`` contains the values of register
arguments to the syscall, but does not contain pointers to memory. The task's


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 10
SUBLEVEL = 41
SUBLEVEL = 43
EXTRAVERSION =
NAME = Dare mighty things
@ -449,7 +449,6 @@ OBJCOPY = llvm-objcopy
OBJDUMP = llvm-objdump
READELF = llvm-readelf
STRIP = llvm-strip
KBUILD_HOSTLDFLAGS += -fuse-ld=lld --rtlib=compiler-rt
else
CC = $(CROSS_COMPILE)gcc
LD = $(CROSS_COMPILE)ld
@ -920,10 +919,10 @@ endif
ifdef CONFIG_LTO_CLANG
ifdef CONFIG_LTO_CLANG_THIN
CC_FLAGS_LTO += -flto=thin -fsplit-lto-unit
CC_FLAGS_LTO := -flto=thin -fsplit-lto-unit
KBUILD_LDFLAGS += --thinlto-cache-dir=$(extmod-prefix).thinlto-cache
else
CC_FLAGS_LTO += -flto
CC_FLAGS_LTO := -flto
endif
ifeq ($(SRCARCH),x86)
@ -1591,7 +1590,7 @@ endif # CONFIG_MODULES
# Directories & files removed with 'make clean'
CLEAN_FILES += include/ksym vmlinux.symvers modules-only.symvers \
modules.builtin modules.builtin.modinfo modules.nsdeps \
compile_commands.json
compile_commands.json .thinlto-cache
# Directories & files removed with 'make mrproper'
MRPROPER_FILES += include/config include/generated \
@ -1605,7 +1604,7 @@ MRPROPER_FILES += include/config include/generated \
*.spec
# Directories & files removed with 'make distclean'
DISTCLEAN_FILES += tags TAGS cscope* GPATH GTAGS GRTAGS GSYMS .thinlto-cache
DISTCLEAN_FILES += tags TAGS cscope* GPATH GTAGS GRTAGS GSYMS
# clean - Delete most, but leave enough to build external modules
#

File diff suppressed because it is too large.


@ -0,0 +1,149 @@
[abi_symbol_list]
# required by fips140.ko
add_random_ready_callback
aead_register_instance
bcmp
cancel_work_sync
__cfi_slowpath
cpu_have_feature
crypto_aead_decrypt
crypto_aead_encrypt
crypto_aead_setauthsize
crypto_aead_setkey
crypto_ahash_finup
crypto_ahash_setkey
crypto_alg_list
crypto_alg_mod_lookup
crypto_alg_sem
crypto_alloc_base
crypto_alloc_rng
crypto_alloc_shash
crypto_attr_alg_name
crypto_check_attr_type
crypto_cipher_encrypt_one
crypto_cipher_setkey
crypto_destroy_tfm
crypto_drop_spawn
crypto_get_default_null_skcipher
crypto_grab_aead
crypto_grab_ahash
crypto_grab_shash
crypto_grab_skcipher
crypto_inst_setname
crypto_put_default_null_skcipher
crypto_register_aead
crypto_register_alg
crypto_register_rngs
crypto_register_shash
crypto_register_shashes
crypto_register_skciphers
crypto_register_template
crypto_register_templates
crypto_remove_final
crypto_remove_spawns
crypto_req_done
crypto_shash_alg_has_setkey
crypto_shash_digest
crypto_shash_final
crypto_shash_finup
crypto_shash_setkey
crypto_shash_tfm_digest
crypto_shash_update
crypto_skcipher_decrypt
crypto_skcipher_encrypt
crypto_skcipher_setkey
crypto_spawn_tfm2
crypto_unregister_aead
crypto_unregister_alg
crypto_unregister_rngs
crypto_unregister_shash
crypto_unregister_shashes
crypto_unregister_skciphers
crypto_unregister_template
crypto_unregister_templates
del_random_ready_callback
down_write
fpsimd_context_busy
get_random_bytes
__init_swait_queue_head
irq_stat
kasan_flag_enabled
kernel_neon_begin
kernel_neon_end
kfree
kfree_sensitive
__kmalloc
kmalloc_caches
kmalloc_order_trace
kmem_cache_alloc_trace
__list_add_valid
__list_del_entry_valid
memcpy
memset
__mutex_init
mutex_lock
mutex_unlock
panic
preempt_schedule
preempt_schedule_notrace
printk
queue_work_on
scatterwalk_ffwd
scatterwalk_map_and_copy
sg_init_one
sg_init_table
sg_next
shash_free_singlespawn_instance
shash_register_instance
skcipher_alloc_instance_simple
skcipher_register_instance
skcipher_walk_aead_decrypt
skcipher_walk_aead_encrypt
skcipher_walk_done
skcipher_walk_virt
snprintf
__stack_chk_fail
__stack_chk_guard
strcmp
strlcat
strlcpy
strlen
strncmp
synchronize_rcu_tasks
system_wq
__traceiter_android_vh_aes_decrypt
__traceiter_android_vh_aes_encrypt
__traceiter_android_vh_aes_expandkey
__traceiter_android_vh_sha256
__tracepoint_android_vh_aes_decrypt
__tracepoint_android_vh_aes_encrypt
__tracepoint_android_vh_aes_expandkey
__tracepoint_android_vh_sha256
tracepoint_probe_register
up_write
wait_for_completion
# needed by fips140.ko but not identified by the tooling
# TODO(b/189327973): [GKI: ABI] Build of fips140.ko module fails to identify some symbols
__crypto_memneq
__crypto_xor
aes_decrypt
aes_encrypt
aes_expandkey
ce_aes_expandkey
crypto_aes_inv_sbox
crypto_aes_sbox
crypto_aes_set_key
crypto_ft_tab
crypto_inc
crypto_it_tab
crypto_sha1_finup
crypto_sha1_update
gf128mul_lle
sha1_transform
sha224_final
sha256
sha256_block_data_order
sha256_final
sha256_update


@ -263,6 +263,7 @@
crypto_alloc_aead
crypto_alloc_base
crypto_alloc_shash
crypto_alloc_sync_skcipher
crypto_comp_compress
crypto_comp_decompress
crypto_destroy_tfm
@ -271,6 +272,8 @@
crypto_shash_digest
crypto_shash_finup
crypto_shash_setkey
crypto_skcipher_encrypt
crypto_skcipher_setkey
crypto_unregister_alg
crypto_unregister_scomp
csum_ipv6_magic
@ -690,6 +693,7 @@
drm_rotation_simplify
drm_self_refresh_helper_alter_state
drm_send_event
drm_send_event_locked
drm_universal_plane_init
drm_vblank_init
drm_writeback_connector_init
@ -722,6 +726,7 @@
extcon_set_property
extcon_set_property_capability
extcon_set_state_sync
extcon_unregister_notifier
failure_tracking
fasync_helper
__fdget
@ -1383,6 +1388,7 @@
printk_deferred
proc_create
proc_create_data
proc_create_single_data
proc_dointvec
proc_dostring
proc_douintvec_minmax
@ -1763,6 +1769,7 @@
synchronize_net
synchronize_rcu
syscon_regmap_lookup_by_phandle
sysctl_sched_latency
sysfs_add_file_to_group
sysfs_create_file_ns
sysfs_create_files
@ -1829,6 +1836,8 @@
trace_event_reg
trace_handle_return
__traceiter_android_rvh_cgroup_force_kthread_migration
__traceiter_android_rvh_check_preempt_wakeup
__traceiter_android_rvh_cpu_cgroup_online
__traceiter_android_rvh_cpu_overutilized
__traceiter_android_rvh_dequeue_task
__traceiter_android_rvh_find_energy_efficient_cpu
@ -1850,7 +1859,9 @@
__traceiter_android_vh_cpu_idle_exit
__traceiter_android_vh_enable_thermal_genl_check
__traceiter_android_vh_ep_create_wakeup_source
__traceiter_android_vh_finish_update_load_avg_se
__traceiter_android_vh_ipi_stop
__traceiter_android_vh_meminfo_proc_show
__traceiter_android_vh_of_i2c_get_board_info
__traceiter_android_vh_pagecache_get_page
__traceiter_android_vh_rmqueue
@ -1860,6 +1871,7 @@
__traceiter_android_vh_typec_tcpci_override_toggling
__traceiter_android_vh_typec_tcpm_adj_current_limit
__traceiter_android_vh_typec_tcpm_get_timer
__traceiter_android_vh_typec_tcpm_log
__traceiter_android_vh_ufs_check_int_errors
__traceiter_android_vh_ufs_compl_command
__traceiter_android_vh_ufs_fill_prdt
@ -1891,6 +1903,8 @@
__traceiter_suspend_resume
trace_output_call
__tracepoint_android_rvh_cgroup_force_kthread_migration
__tracepoint_android_rvh_check_preempt_wakeup
__tracepoint_android_rvh_cpu_cgroup_online
__tracepoint_android_rvh_cpu_overutilized
__tracepoint_android_rvh_dequeue_task
__tracepoint_android_rvh_find_energy_efficient_cpu
@ -1912,7 +1926,9 @@
__tracepoint_android_vh_cpu_idle_exit
__tracepoint_android_vh_enable_thermal_genl_check
__tracepoint_android_vh_ep_create_wakeup_source
__tracepoint_android_vh_finish_update_load_avg_se
__tracepoint_android_vh_ipi_stop
__tracepoint_android_vh_meminfo_proc_show
__tracepoint_android_vh_of_i2c_get_board_info
__tracepoint_android_vh_pagecache_get_page
__tracepoint_android_vh_rmqueue
@ -1922,6 +1938,7 @@
__tracepoint_android_vh_typec_tcpci_override_toggling
__tracepoint_android_vh_typec_tcpm_adj_current_limit
__tracepoint_android_vh_typec_tcpm_get_timer
__tracepoint_android_vh_typec_tcpm_log
__tracepoint_android_vh_ufs_check_int_errors
__tracepoint_android_vh_ufs_compl_command
__tracepoint_android_vh_ufs_fill_prdt
@ -1956,6 +1973,7 @@
trace_print_array_seq
trace_print_bitmask_seq
trace_print_flags_seq
trace_print_hex_seq
trace_print_symbols_seq
trace_raw_output_prep
trace_seq_printf
@ -2137,6 +2155,7 @@
vm_map_pages
vm_map_ram
vm_unmap_ram
vprintk
vring_del_virtqueue
vring_interrupt
vring_new_virtqueue


@ -276,6 +276,8 @@
cpu_latency_qos_remove_request
cpu_latency_qos_request_active
cpu_latency_qos_update_request
cpu_maps_update_begin
cpu_maps_update_done
cpumask_any_but
cpumask_next
cpumask_next_and
@ -375,6 +377,7 @@
dev_alloc_name
dev_coredumpv
_dev_crit
__dev_direct_xmit
dev_driver_string
_dev_emerg
_dev_err
@ -391,6 +394,7 @@
__dev_get_by_index
dev_get_by_index
dev_get_by_name
dev_get_by_name_rcu
dev_get_regmap
device_add
device_add_disk
@ -860,6 +864,7 @@
drm_universal_plane_init
drm_vblank_init
drm_wait_one_vblank
dst_release
dump_stack
__dynamic_dev_dbg
__dynamic_pr_debug
@ -981,6 +986,7 @@
get_zeroed_page
gfp_zone
gic_nonsecure_priorities
gic_resume
gov_attr_set_init
gov_attr_set_put
governor_sysfs_ops
@ -1169,6 +1175,7 @@
__iowrite32_copy
ip_compute_csum
ipi_desc_get
ip_route_output_flow
iput
__ipv6_addr_type
ipv6_ext_hdr
@ -1227,6 +1234,7 @@
isolate_and_split_free_page
isolate_anon_lru_page
is_vmalloc_addr
iterate_fd
jiffies
jiffies_to_msecs
jiffies_to_usecs
@ -1436,7 +1444,10 @@
napi_gro_receive
__napi_schedule
napi_schedule_prep
neigh_destroy
__neigh_event_send
neigh_lookup
neigh_xmit
__netdev_alloc_skb
netdev_rx_handler_register
netdev_rx_handler_unregister
@ -1619,20 +1630,26 @@
param_set_copystring
param_set_int
pause_cpus
pci_aer_clear_nonfatal_status
pci_alloc_irq_vectors_affinity
pci_assign_resource
pci_bus_type
pci_clear_master
pci_d3cold_disable
pci_device_group
pci_device_is_present
pci_dev_present
pci_disable_device
pci_disable_msi
pci_disable_pcie_error_reporting
pcie_capability_clear_and_set_word
pcie_capability_read_word
pci_enable_device
pci_enable_pcie_error_reporting
pci_find_ext_capability
pci_free_irq_vectors
pci_get_device
pci_get_domain_bus_and_slot
pci_host_probe
pci_iomap
pci_irq_vector
@ -2113,6 +2130,7 @@
sg_pcopy_from_buffer
sg_pcopy_to_buffer
sg_scsi_ioctl
shmem_mark_page_lazyfree
shmem_truncate_range
show_rcu_gp_kthreads
show_regs
@ -2364,6 +2382,7 @@
task_may_not_preempt
__task_pid_nr_ns
__task_rq_lock
thermal_cooling_device_register
thermal_cooling_device_unregister
thermal_of_cooling_device_register
thermal_pressure
@ -2452,13 +2471,18 @@
__traceiter_android_vh_binder_wakeup_ilocked
__traceiter_android_vh_cpu_idle_enter
__traceiter_android_vh_cpu_idle_exit
__traceiter_android_vh_cpuidle_psci_enter
__traceiter_android_vh_cpuidle_psci_exit
__traceiter_android_vh_dump_throttled_rt_tasks
__traceiter_android_vh_force_compatible_post
__traceiter_android_vh_force_compatible_pre
__traceiter_android_vh_freq_table_limits
__traceiter_android_vh_ftrace_dump_buffer
__traceiter_android_vh_ftrace_format_check
__traceiter_android_vh_ftrace_oops_enter
__traceiter_android_vh_ftrace_oops_exit
__traceiter_android_vh_ftrace_size_check
__traceiter_android_vh_gic_resume
__traceiter_android_vh_gpio_block_read
__traceiter_android_vh_iommu_setup_dma_ops
__traceiter_android_vh_ipi_stop
@ -2542,19 +2566,25 @@
__tracepoint_android_vh_check_uninterruptible_tasks_dn
__tracepoint_android_vh_cpu_idle_enter
__tracepoint_android_vh_cpu_idle_exit
__tracepoint_android_vh_cpuidle_psci_enter
__tracepoint_android_vh_cpuidle_psci_exit
__tracepoint_android_vh_dump_throttled_rt_tasks
__tracepoint_android_vh_force_compatible_post
__tracepoint_android_vh_force_compatible_pre
__tracepoint_android_vh_freq_table_limits
__tracepoint_android_vh_ftrace_dump_buffer
__tracepoint_android_vh_ftrace_format_check
__tracepoint_android_vh_ftrace_oops_enter
__tracepoint_android_vh_ftrace_oops_exit
__tracepoint_android_vh_ftrace_size_check
__tracepoint_android_vh_gic_resume
__tracepoint_android_vh_gpio_block_read
__tracepoint_android_vh_iommu_setup_dma_ops
__tracepoint_android_vh_ipi_stop
__tracepoint_android_vh_jiffies_update
__tracepoint_android_vh_logbuf
__tracepoint_android_vh_printk_hotplug
__tracepoint_android_vh_process_killed
__tracepoint_android_vh_psi_event
__tracepoint_android_vh_psi_group
__tracepoint_android_vh_scheduler_tick


@ -0,0 +1 @@
crypto/fips140.ko


@ -17,9 +17,9 @@
extern void clear_page(void *page);
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vmaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vmaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
extern void copy_page(void * _to, void * _from);
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)


@ -1168,7 +1168,7 @@ timer2: timer@0 {
};
};
target-module@34000 { /* 0x48034000, ap 7 46.0 */
timer3_target: target-module@34000 { /* 0x48034000, ap 7 46.0 */
compatible = "ti,sysc-omap4-timer", "ti,sysc";
reg = <0x34000 0x4>,
<0x34010 0x4>;
@ -1195,7 +1195,7 @@ timer3: timer@0 {
};
};
target-module@36000 { /* 0x48036000, ap 9 4e.0 */
timer4_target: target-module@36000 { /* 0x48036000, ap 9 4e.0 */
compatible = "ti,sysc-omap4-timer", "ti,sysc";
reg = <0x36000 0x4>,
<0x36010 0x4>;


@ -46,6 +46,7 @@ aliases {
timer {
compatible = "arm,armv7-timer";
status = "disabled"; /* See ARM architected timer wrap erratum i940 */
interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
@ -1090,3 +1091,22 @@ timer@0 {
assigned-clock-parents = <&sys_32k_ck>;
};
};
/* Local timers, see ARM architected timer wrap erratum i940 */
&timer3_target {
ti,no-reset-on-init;
ti,no-idle;
timer@0 {
assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER3_CLKCTRL 24>;
assigned-clock-parents = <&timer_sys_clk_div>;
};
};
&timer4_target {
ti,no-reset-on-init;
ti,no-idle;
timer@0 {
assigned-clocks = <&l4per_clkctrl DRA7_L4PER_TIMER4_CLKCTRL 24>;
assigned-clock-parents = <&timer_sys_clk_div>;
};
};


@ -105,9 +105,13 @@ &fec {
phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
phy-reset-duration = <20>;
phy-supply = <&sw2_reg>;
phy-handle = <&ethphy0>;
status = "okay";
fixed-link {
speed = <1000>;
full-duplex;
};
mdio {
#address-cells = <1>;
#size-cells = <0>;


@ -406,6 +406,18 @@ &reg_soc {
vin-supply = <&sw1_reg>;
};
&reg_pu {
vin-supply = <&sw1_reg>;
};
&reg_vdd1p1 {
vin-supply = <&sw2_reg>;
};
&reg_vdd2p5 {
vin-supply = <&sw2_reg>;
};
&uart1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_uart1>;


@ -126,7 +126,7 @@ boardid: gpio@3a {
compatible = "nxp,pca8574";
reg = <0x3a>;
gpio-controller;
#gpio-cells = <1>;
#gpio-cells = <2>;
};
};


@ -193,7 +193,7 @@ &usdhc1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usdhc1>;
keep-power-in-suspend;
tuning-step = <2>;
fsl,tuning-step = <2>;
vmmc-supply = <&reg_3p3v>;
no-1-8-v;
broken-cd;


@ -351,7 +351,7 @@ &usdhc1 {
pinctrl-2 = <&pinctrl_usdhc1_200mhz>;
cd-gpios = <&gpio5 0 GPIO_ACTIVE_LOW>;
bus-width = <4>;
tuning-step = <2>;
fsl,tuning-step = <2>;
vmmc-supply = <&reg_3p3v>;
wakeup-source;
no-1-8-v;


@ -666,6 +666,22 @@ config ARM64_ERRATUM_1508412
If unsure, say Y.
config ARM64_ERRATUM_1974925
bool "Kryo 7XX: 1974925: Incorrect read value for Performance Monitors Common Event Identification Register"
default y
depends on HW_PERF_EVENTS
help
This option adds a workaround for QCOM Kryo erratum 1974925.
Affected cores return an incorrect value of the AArch64 System
Register Performance Monitors Common Event Identification Register 0
(PMCEID0_EL0) in which bits corresponding to those events that are
fully implemented are not set.
Work around this issue by manually setting those bits to true.
If unsure, say Y.
config CAVIUM_ERRATUM_22375
bool "Cavium erratum 22375, 24313"
default y


@ -0,0 +1,38 @@
# SPDX-License-Identifier: GPL-2.0
#
# This file is included by the generic Kbuild makefile to permit the
# architecture to perform postlink actions on vmlinux and any .ko module file.
# In this case, we only need it for fips140.ko, which needs a HMAC digest to be
# injected into it. All other targets are NOPs.
#
PHONY := __archpost
__archpost:
-include include/config/auto.conf
include scripts/Kbuild.include
CMD_FIPS140_GEN_HMAC = crypto/fips140_gen_hmac
quiet_cmd_gen_hmac = HMAC $@
cmd_gen_hmac = $(CMD_FIPS140_GEN_HMAC) $@
# `@true` prevents complaints when there is nothing to be done
vmlinux: FORCE
@true
$(objtree)/crypto/fips140.ko: FORCE
$(call cmd,gen_hmac)
%.ko: FORCE
@true
clean:
@true
PHONY += FORCE clean
FORCE:
.PHONY: $(PHONY)


@ -31,11 +31,10 @@ phy1: ethernet-phy@4 {
reg = <0x4>;
eee-broken-1000t;
eee-broken-100tx;
qca,clk-out-frequency = <125000000>;
qca,clk-out-strength = <AR803X_STRENGTH_FULL>;
vddio-supply = <&vddh>;
qca,keep-pll-enabled;
vddio-supply = <&vddio>;
vddio: vddio-regulator {
regulator-name = "VDDIO";


@ -192,8 +192,8 @@ soc: soc {
ddr: memory-controller@1080000 {
compatible = "fsl,qoriq-memory-controller";
reg = <0x0 0x1080000 0x0 0x1000>;
interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
big-endian;
interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
little-endian;
};
dcfg: syscon@1e00000 {


@ -45,8 +45,8 @@ pcie1_refclk: clock-pcie1-refclk {
reg_12p0_main: regulator-12p0-main {
compatible = "regulator-fixed";
regulator-name = "12V_MAIN";
regulator-min-microvolt = <5000000>;
regulator-max-microvolt = <5000000>;
regulator-min-microvolt = <12000000>;
regulator-max-microvolt = <12000000>;
regulator-always-on;
};


@ -78,6 +78,8 @@ main_navss: bus@30000000 {
#size-cells = <2>;
ranges = <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>;
ti,sci-dev-id = <199>;
dma-coherent;
dma-ranges;
main_navss_intr: interrupt-controller1 {
compatible = "ti,sci-intr";


@ -0,0 +1 @@
CONFIG_CRYPTO_FIPS140_MOD=y


@ -227,20 +227,33 @@ CONFIG_MAC802154=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_SCH_MULTIQ=y
CONFIG_NET_SCH_SFQ=y
CONFIG_NET_SCH_TBF=y
CONFIG_NET_SCH_NETEM=y
CONFIG_NET_SCH_CODEL=y
CONFIG_NET_SCH_FQ_CODEL=y
CONFIG_NET_SCH_FQ=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_BASIC=y
CONFIG_NET_CLS_TCINDEX=y
CONFIG_NET_CLS_FW=y
CONFIG_NET_CLS_U32=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_FLOW=y
CONFIG_NET_CLS_BPF=y
CONFIG_NET_CLS_MATCHALL=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_CMP=y
CONFIG_NET_EMATCH_NBYTE=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_EMATCH_META=y
CONFIG_NET_EMATCH_TEXT=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_NET_ACT_SKBEDIT=y
CONFIG_VSOCKETS=y
CONFIG_BPF_JIT=y
CONFIG_BT=y
@ -260,6 +273,7 @@ CONFIG_MAC80211=y
CONFIG_RFKILL=y
CONFIG_PCI=y
CONFIG_PCIEPORTBUS=y
CONFIG_PCIEAER=y
CONFIG_PCI_HOST_GENERIC=y
CONFIG_PCIE_DW_PLAT_EP=y
CONFIG_PCIE_QCOM=y
@ -301,17 +315,18 @@ CONFIG_WIREGUARD=y
CONFIG_IFB=y
CONFIG_TUN=y
CONFIG_VETH=y
CONFIG_PHYLIB=y
CONFIG_PPP=y
CONFIG_PPP_BSDCOMP=y
CONFIG_PPP_DEFLATE=y
CONFIG_PPP_MPPE=y
CONFIG_PPTP=y
CONFIG_PPPOL2TP=y
CONFIG_USB_RTL8150=y
CONFIG_USB_RTL8152=y
CONFIG_USB_USBNET=y
# CONFIG_USB_NET_AX8817X is not set
# CONFIG_USB_NET_AX88179_178A is not set
CONFIG_USB_NET_CDC_EEM=y
# CONFIG_USB_NET_NET1080 is not set
# CONFIG_USB_NET_CDC_SUBSET is not set
# CONFIG_USB_NET_ZAURUS is not set
@ -466,7 +481,9 @@ CONFIG_USB_CONFIGFS_UEVENT=y
CONFIG_USB_CONFIGFS_SERIAL=y
CONFIG_USB_CONFIGFS_ACM=y
CONFIG_USB_CONFIGFS_NCM=y
CONFIG_USB_CONFIGFS_ECM=y
CONFIG_USB_CONFIGFS_RNDIS=y
CONFIG_USB_CONFIGFS_EEM=y
CONFIG_USB_CONFIGFS_MASS_STORAGE=y
CONFIG_USB_CONFIGFS_F_FS=y
CONFIG_USB_CONFIGFS_F_ACC=y
@ -525,7 +542,6 @@ CONFIG_PWM=y
CONFIG_GENERIC_PHY=y
CONFIG_POWERCAP=y
CONFIG_DTPM=y
CONFIG_RAS=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ANDROID_BINDERFS=y
@ -560,6 +576,7 @@ CONFIG_PSTORE=y
CONFIG_PSTORE_CONSOLE=y
CONFIG_PSTORE_PMSG=y
CONFIG_PSTORE_RAM=y
CONFIG_EROFS_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=y
CONFIG_NLS_CODEPAGE_775=y


@ -0,0 +1,44 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# Create a separate FIPS archive that duplicates the modules that are relevant
# for FIPS 140 certification as builtin objects
#
sha1-ce-y := sha1-ce-glue.o sha1-ce-core.o
sha2-ce-y := sha2-ce-glue.o sha2-ce-core.o
sha512-ce-y := sha512-ce-glue.o sha512-ce-core.o
ghash-ce-y := ghash-ce-glue.o ghash-ce-core.o
aes-ce-cipher-y := aes-ce-core.o aes-ce-glue.o
aes-ce-blk-y := aes-glue-ce.o aes-ce.o
aes-neon-blk-y := aes-glue-neon.o aes-neon.o
sha256-arm64-y := sha256-glue.o sha256-core.o
sha512-arm64-y := sha512-glue.o sha512-core.o
aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o
aes-neon-bs-y := aes-neonbs-core.o aes-neonbs-glue.o
crypto-arm64-fips-src := $(srctree)/arch/arm64/crypto/
crypto-arm64-fips-modules := sha1-ce.o sha2-ce.o sha512-ce.o ghash-ce.o \
aes-ce-cipher.o aes-ce-blk.o aes-neon-blk.o \
sha256-arm64.o sha512-arm64.o aes-arm64.o \
aes-neon-bs.o
crypto-fips-objs += $(foreach o,$(crypto-arm64-fips-modules),$($(o:.o=-y):.o=-fips-arch.o))
CFLAGS_aes-glue-ce-fips-arch.o := -DUSE_V8_CRYPTO_EXTENSIONS
$(obj)/aes-glue-%-fips-arch.o: KBUILD_CFLAGS += $(FIPS140_CFLAGS)
$(obj)/aes-glue-%-fips-arch.o: $(crypto-arm64-fips-src)/aes-glue.c FORCE
$(call if_changed_rule,cc_o_c)
$(obj)/%-fips-arch.o: KBUILD_CFLAGS += $(FIPS140_CFLAGS)
$(obj)/%-fips-arch.o: $(crypto-arm64-fips-src)/%.c FORCE
$(call if_changed_rule,cc_o_c)
$(obj)/%-fips-arch.o: $(crypto-arm64-fips-src)/%.S FORCE
$(call if_changed_rule,as_o_S)
$(obj)/%: $(crypto-arm64-fips-src)/%_shipped
$(call cmd,shipped)
$(obj)/%-fips-arch.o: $(obj)/%.S FORCE
$(call if_changed_rule,as_o_S)


@ -88,16 +88,12 @@ config CRYPTO_AES_ARM64_CE_BLK
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_AES_ARM64_CE
select CRYPTO_AES_ARM64
select CRYPTO_SIMD
config CRYPTO_AES_ARM64_NEON_BLK
tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions"
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_AES_ARM64
select CRYPTO_LIB_AES
select CRYPTO_SIMD
config CRYPTO_CHACHA20_NEON
tristate "ChaCha20, XChaCha20, and XChaCha12 stream ciphers using NEON instructions"
@ -122,8 +118,6 @@ config CRYPTO_AES_ARM64_BS
depends on KERNEL_MODE_NEON
select CRYPTO_SKCIPHER
select CRYPTO_AES_ARM64_NEON_BLK
select CRYPTO_AES_ARM64
select CRYPTO_LIB_AES
select CRYPTO_SIMD
endif


@ -442,7 +442,7 @@ static int __maybe_unused essiv_cbc_decrypt(struct skcipher_request *req)
return err ?: cbc_decrypt_walk(req, &walk);
}
static int ctr_encrypt(struct skcipher_request *req)
static int __maybe_unused ctr_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
@ -481,29 +481,6 @@ static int ctr_encrypt(struct skcipher_request *req)
return err;
}
static void ctr_encrypt_one(struct crypto_skcipher *tfm, const u8 *src, u8 *dst)
{
const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
unsigned long flags;
/*
* Temporarily disable interrupts to avoid races where
* cachelines are evicted when the CPU is interrupted
* to do something else.
*/
local_irq_save(flags);
aes_encrypt(ctx, dst, src);
local_irq_restore(flags);
}
static int __maybe_unused ctr_encrypt_sync(struct skcipher_request *req)
{
if (!crypto_simd_usable())
return crypto_ctr_encrypt_walk(req, ctr_encrypt_one);
return ctr_encrypt(req);
}
static int __maybe_unused xts_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -652,10 +629,9 @@ static int __maybe_unused xts_decrypt(struct skcipher_request *req)
static struct skcipher_alg aes_algs[] = { {
#if defined(USE_V8_CRYPTO_EXTENSIONS) || !IS_ENABLED(CONFIG_CRYPTO_AES_ARM64_BS)
.base = {
.cra_name = "__ecb(aes)",
.cra_driver_name = "__ecb-aes-" MODE,
.cra_name = "ecb(aes)",
.cra_driver_name = "ecb-aes-" MODE,
.cra_priority = PRIO,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
@@ -667,10 +643,9 @@ static struct skcipher_alg aes_algs[] = { {
.decrypt = ecb_decrypt,
}, {
.base = {
.cra_name = "__cbc(aes)",
.cra_driver_name = "__cbc-aes-" MODE,
.cra_name = "cbc(aes)",
.cra_driver_name = "cbc-aes-" MODE,
.cra_priority = PRIO,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
@@ -683,10 +658,9 @@ static struct skcipher_alg aes_algs[] = { {
.decrypt = cbc_decrypt,
}, {
.base = {
.cra_name = "__ctr(aes)",
.cra_driver_name = "__ctr-aes-" MODE,
.cra_name = "ctr(aes)",
.cra_driver_name = "ctr-aes-" MODE,
.cra_priority = PRIO,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
@@ -700,26 +674,9 @@ static struct skcipher_alg aes_algs[] = { {
.decrypt = ctr_encrypt,
}, {
.base = {
.cra_name = "ctr(aes)",
.cra_driver_name = "ctr-aes-" MODE,
.cra_priority = PRIO - 1,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
},
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
.ivsize = AES_BLOCK_SIZE,
.chunksize = AES_BLOCK_SIZE,
.setkey = skcipher_aes_setkey,
.encrypt = ctr_encrypt_sync,
.decrypt = ctr_encrypt_sync,
}, {
.base = {
.cra_name = "__xts(aes)",
.cra_driver_name = "__xts-aes-" MODE,
.cra_name = "xts(aes)",
.cra_driver_name = "xts-aes-" MODE,
.cra_priority = PRIO,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_xts_ctx),
.cra_module = THIS_MODULE,
@@ -734,10 +691,9 @@ static struct skcipher_alg aes_algs[] = { {
}, {
#endif
.base = {
.cra_name = "__cts(cbc(aes))",
.cra_driver_name = "__cts-cbc-aes-" MODE,
.cra_name = "cts(cbc(aes))",
.cra_driver_name = "cts-cbc-aes-" MODE,
.cra_priority = PRIO,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
@@ -751,10 +707,9 @@ static struct skcipher_alg aes_algs[] = { {
.decrypt = cts_cbc_decrypt,
}, {
.base = {
.cra_name = "__essiv(cbc(aes),sha256)",
.cra_driver_name = "__essiv-cbc-aes-sha256-" MODE,
.cra_name = "essiv(cbc(aes),sha256)",
.cra_driver_name = "essiv-cbc-aes-sha256-" MODE,
.cra_priority = PRIO + 1,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_essiv_cbc_ctx),
.cra_module = THIS_MODULE,
@@ -993,28 +948,15 @@ static struct shash_alg mac_algs[] = { {
.descsize = sizeof(struct mac_desc_ctx),
} };
static struct simd_skcipher_alg *aes_simd_algs[ARRAY_SIZE(aes_algs)];
static void aes_exit(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(aes_simd_algs); i++)
if (aes_simd_algs[i])
simd_skcipher_free(aes_simd_algs[i]);
crypto_unregister_shashes(mac_algs, ARRAY_SIZE(mac_algs));
crypto_unregister_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
}
static int __init aes_init(void)
{
struct simd_skcipher_alg *simd;
const char *basename;
const char *algname;
const char *drvname;
int err;
int i;
err = crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
if (err)
@@ -1024,26 +966,8 @@ static int __init aes_init(void)
if (err)
goto unregister_ciphers;
for (i = 0; i < ARRAY_SIZE(aes_algs); i++) {
if (!(aes_algs[i].base.cra_flags & CRYPTO_ALG_INTERNAL))
continue;
algname = aes_algs[i].base.cra_name + 2;
drvname = aes_algs[i].base.cra_driver_name + 2;
basename = aes_algs[i].base.cra_driver_name;
simd = simd_skcipher_create_compat(algname, drvname, basename);
err = PTR_ERR(simd);
if (IS_ERR(simd))
goto unregister_simds;
aes_simd_algs[i] = simd;
}
return 0;
unregister_simds:
aes_exit();
return err;
unregister_ciphers:
crypto_unregister_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
return err;


@@ -63,11 +63,6 @@ struct aesbs_cbc_ctx {
u32 enc[AES_MAX_KEYLENGTH_U32];
};
struct aesbs_ctr_ctx {
struct aesbs_ctx key; /* must be first member */
struct crypto_aes_ctx fallback;
};
struct aesbs_xts_ctx {
struct aesbs_ctx key;
u32 twkey[AES_MAX_KEYLENGTH_U32];
@@ -207,25 +202,6 @@ static int cbc_decrypt(struct skcipher_request *req)
return err;
}
static int aesbs_ctr_setkey_sync(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
int err;
err = aes_expandkey(&ctx->fallback, in_key, key_len);
if (err)
return err;
ctx->key.rounds = 6 + key_len / 4;
kernel_neon_begin();
aesbs_convert_key(ctx->key.rk, ctx->fallback.key_enc, ctx->key.rounds);
kernel_neon_end();
return 0;
}
static int ctr_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -292,29 +268,6 @@ static int aesbs_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
return aesbs_setkey(tfm, in_key, key_len);
}
static void ctr_encrypt_one(struct crypto_skcipher *tfm, const u8 *src, u8 *dst)
{
struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
unsigned long flags;
/*
* Temporarily disable interrupts to avoid races where
* cachelines are evicted when the CPU is interrupted
* to do something else.
*/
local_irq_save(flags);
aes_encrypt(&ctx->fallback, dst, src);
local_irq_restore(flags);
}
static int ctr_encrypt_sync(struct skcipher_request *req)
{
if (!crypto_simd_usable())
return crypto_ctr_encrypt_walk(req, ctr_encrypt_one);
return ctr_encrypt(req);
}
static int __xts_crypt(struct skcipher_request *req, bool encrypt,
void (*fn)(u8 out[], u8 const in[], u8 const rk[],
int rounds, int blocks, u8 iv[]))
@@ -431,13 +384,12 @@ static int xts_decrypt(struct skcipher_request *req)
}
static struct skcipher_alg aes_algs[] = { {
.base.cra_name = "__ecb(aes)",
.base.cra_driver_name = "__ecb-aes-neonbs",
.base.cra_name = "ecb(aes)",
.base.cra_driver_name = "ecb-aes-neonbs",
.base.cra_priority = 250,
.base.cra_blocksize = AES_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct aesbs_ctx),
.base.cra_module = THIS_MODULE,
.base.cra_flags = CRYPTO_ALG_INTERNAL,
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
@@ -446,13 +398,12 @@ static struct skcipher_alg aes_algs[] = { {
.encrypt = ecb_encrypt,
.decrypt = ecb_decrypt,
}, {
.base.cra_name = "__cbc(aes)",
.base.cra_driver_name = "__cbc-aes-neonbs",
.base.cra_name = "cbc(aes)",
.base.cra_driver_name = "cbc-aes-neonbs",
.base.cra_priority = 250,
.base.cra_blocksize = AES_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct aesbs_cbc_ctx),
.base.cra_module = THIS_MODULE,
.base.cra_flags = CRYPTO_ALG_INTERNAL,
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
@@ -462,13 +413,12 @@ static struct skcipher_alg aes_algs[] = { {
.encrypt = cbc_encrypt,
.decrypt = cbc_decrypt,
}, {
.base.cra_name = "__ctr(aes)",
.base.cra_driver_name = "__ctr-aes-neonbs",
.base.cra_name = "ctr(aes)",
.base.cra_driver_name = "ctr-aes-neonbs",
.base.cra_priority = 250,
.base.cra_blocksize = 1,
.base.cra_ctxsize = sizeof(struct aesbs_ctx),
.base.cra_module = THIS_MODULE,
.base.cra_flags = CRYPTO_ALG_INTERNAL,
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
@@ -479,29 +429,12 @@ static struct skcipher_alg aes_algs[] = { {
.encrypt = ctr_encrypt,
.decrypt = ctr_encrypt,
}, {
.base.cra_name = "ctr(aes)",
.base.cra_driver_name = "ctr-aes-neonbs",
.base.cra_priority = 250 - 1,
.base.cra_blocksize = 1,
.base.cra_ctxsize = sizeof(struct aesbs_ctr_ctx),
.base.cra_module = THIS_MODULE,
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
.chunksize = AES_BLOCK_SIZE,
.walksize = 8 * AES_BLOCK_SIZE,
.ivsize = AES_BLOCK_SIZE,
.setkey = aesbs_ctr_setkey_sync,
.encrypt = ctr_encrypt_sync,
.decrypt = ctr_encrypt_sync,
}, {
.base.cra_name = "__xts(aes)",
.base.cra_driver_name = "__xts-aes-neonbs",
.base.cra_name = "xts(aes)",
.base.cra_driver_name = "xts-aes-neonbs",
.base.cra_priority = 250,
.base.cra_blocksize = AES_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct aesbs_xts_ctx),
.base.cra_module = THIS_MODULE,
.base.cra_flags = CRYPTO_ALG_INTERNAL,
.min_keysize = 2 * AES_MIN_KEY_SIZE,
.max_keysize = 2 * AES_MAX_KEY_SIZE,
@@ -512,54 +445,17 @@ static struct skcipher_alg aes_algs[] = { {
.decrypt = xts_decrypt,
} };
static struct simd_skcipher_alg *aes_simd_algs[ARRAY_SIZE(aes_algs)];
static void aes_exit(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(aes_simd_algs); i++)
if (aes_simd_algs[i])
simd_skcipher_free(aes_simd_algs[i]);
crypto_unregister_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
}
static int __init aes_init(void)
{
struct simd_skcipher_alg *simd;
const char *basename;
const char *algname;
const char *drvname;
int err;
int i;
if (!cpu_have_named_feature(ASIMD))
return -ENODEV;
err = crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
if (err)
return err;
for (i = 0; i < ARRAY_SIZE(aes_algs); i++) {
if (!(aes_algs[i].base.cra_flags & CRYPTO_ALG_INTERNAL))
continue;
algname = aes_algs[i].base.cra_name + 2;
drvname = aes_algs[i].base.cra_driver_name + 2;
basename = aes_algs[i].base.cra_driver_name;
simd = simd_skcipher_create_compat(algname, drvname, basename);
err = PTR_ERR(simd);
if (IS_ERR(simd))
goto unregister_simds;
aes_simd_algs[i] = simd;
}
return 0;
unregister_simds:
aes_exit();
return err;
return crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
}
module_init(aes_init);


@@ -9,6 +9,7 @@
/* A64 instructions are always 32 bits. */
#define AARCH64_INSN_SIZE 4
#ifndef BUILD_FIPS140_KO
#ifndef __ASSEMBLY__
#include <linux/stringify.h>
@@ -214,4 +215,33 @@ alternative_endif
#define ALTERNATIVE(oldinstr, newinstr, ...) \
_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
#else
/*
* The FIPS140 module does not support alternatives patching, as this
* invalidates the HMAC digest of the .text section. However, some alternatives
* are known to be irrelevant so we can tolerate them in the FIPS140 module, as
* they will never be applied in the first place in the use cases that the
* FIPS140 module targets (Android running on a production phone). Any other
* uses of alternatives should be avoided, as it is not safe in the general
* case to simply use the default sequence in one place (the fips module) and
* the alternative sequence everywhere else.
*
* Below is an allowlist of features that we can ignore, by simply taking the
* safe default instruction sequence. Note that this implies that the FIPS140
* module is not compatible with VHE, or with pseudo-NMI support.
*/
#define __ALT_ARM64_HAS_LDAPR 0,
#define __ALT_ARM64_HAS_VIRT_HOST_EXTN 0,
#define __ALT_ARM64_HAS_IRQ_PRIO_MASKING 0,
#define ALTERNATIVE(oldinstr, newinstr, feature, ...) \
_ALTERNATIVE(oldinstr, __ALT_ ## feature, #feature)
#define _ALTERNATIVE(oldinstr, feature, feature_str) \
__take_second_arg(feature oldinstr, \
".err Feature " feature_str " not supported in fips140 module")
#endif /* BUILD_FIPS140_KO */
#endif /* __ASM_ALTERNATIVE_MACROS_H */


@@ -463,4 +463,9 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
vcpu->arch.flags |= KVM_ARM64_INCREMENT_PC;
}
static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature)
{
return test_bit(feature, vcpu->arch.features);
}
#endif /* __ARM64_KVM_EMULATE_H__ */


@@ -3,5 +3,34 @@ SECTIONS {
.plt 0 (NOLOAD) : { BYTE(0) }
.init.plt 0 (NOLOAD) : { BYTE(0) }
.text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
#ifdef CONFIG_CRYPTO_FIPS140
/*
* The FIPS140 module incorporates copies of builtin code, which gets
* integrity checked at module load time, and registered in a way that
* ensures that the integrity checked versions supersede the builtin
* ones. These objects are compiled as builtin code, and so their init
* hooks will be exported from the binary in the same way as builtin
* initcalls are, i.e., annotated with a level that defines the order
* in which the hooks are expected to be invoked.
*/
#define INIT_CALLS_LEVEL(level) \
KEEP(*(.initcall##level##.init*)) \
KEEP(*(.initcall##level##s.init*))
.initcalls : {
*(.initcalls._start)
INIT_CALLS_LEVEL(0)
INIT_CALLS_LEVEL(1)
INIT_CALLS_LEVEL(2)
INIT_CALLS_LEVEL(3)
INIT_CALLS_LEVEL(4)
INIT_CALLS_LEVEL(5)
INIT_CALLS_LEVEL(rootfs)
INIT_CALLS_LEVEL(6)
INIT_CALLS_LEVEL(7)
*(.initcalls._end)
}
#endif
}
#endif


@@ -48,43 +48,84 @@ static inline u8 mte_get_random_tag(void)
return mte_get_ptr_tag(addr);
}
static inline u64 __stg_post(u64 p)
{
asm volatile(__MTE_PREAMBLE "stg %0, [%0], #16"
: "+r"(p)
:
: "memory");
return p;
}
static inline u64 __stzg_post(u64 p)
{
asm volatile(__MTE_PREAMBLE "stzg %0, [%0], #16"
: "+r"(p)
:
: "memory");
return p;
}
static inline void __dc_gva(u64 p)
{
asm volatile(__MTE_PREAMBLE "dc gva, %0" : : "r"(p) : "memory");
}
static inline void __dc_gzva(u64 p)
{
asm volatile(__MTE_PREAMBLE "dc gzva, %0" : : "r"(p) : "memory");
}
/*
* Assign allocation tags for a region of memory based on the pointer tag.
* Note: The address must be non-NULL and MTE_GRANULE_SIZE aligned and
* size must be non-zero and MTE_GRANULE_SIZE aligned.
* size must be MTE_GRANULE_SIZE aligned.
*/
static inline void mte_set_mem_tag_range(void *addr, size_t size,
u8 tag, bool init)
static inline void mte_set_mem_tag_range(void *addr, size_t size, u8 tag,
bool init)
{
u64 curr, end;
u64 curr, mask, dczid_bs, end1, end2, end3;
if (!size)
return;
/* Read DC G(Z)VA block size from the system register. */
dczid_bs = 4ul << (read_cpuid(DCZID_EL0) & 0xf);
curr = (u64)__tag_set(addr, tag);
end = curr + size;
mask = dczid_bs - 1;
/* STG/STZG up to the end of the first block. */
end1 = curr | mask;
end3 = curr + size;
/* DC GVA / GZVA in [end1, end2) */
end2 = end3 & ~mask;
/*
* 'asm volatile' is required to prevent the compiler to move
* the statement outside of the loop.
* The following code uses STG on the first DC GVA block even if the
* start address is aligned - it appears to be faster than an alignment
* check + conditional branch. Also, if the range size is at least 2 DC
* GVA blocks, the first two loops can use post-condition to save one
* branch each.
*/
if (init) {
do {
asm volatile(__MTE_PREAMBLE "stzg %0, [%0]"
:
: "r" (curr)
: "memory");
curr += MTE_GRANULE_SIZE;
} while (curr != end);
} else {
do {
asm volatile(__MTE_PREAMBLE "stg %0, [%0]"
:
: "r" (curr)
: "memory");
curr += MTE_GRANULE_SIZE;
} while (curr != end);
}
#define SET_MEMTAG_RANGE(stg_post, dc_gva) \
do { \
if (size >= 2 * dczid_bs) { \
do { \
curr = stg_post(curr); \
} while (curr < end1); \
\
do { \
dc_gva(curr); \
curr += dczid_bs; \
} while (curr < end2); \
} \
\
while (curr < end3) \
curr = stg_post(curr); \
} while (0)
if (init)
SET_MEMTAG_RANGE(__stzg_post, __dc_gzva);
else
SET_MEMTAG_RANGE(__stg_post, __dc_gva);
#undef SET_MEMTAG_RANGE
}
void mte_enable_kernel_sync(void);


@@ -28,9 +28,9 @@ void copy_user_highpage(struct page *to, struct page *from,
void copy_highpage(struct page *to, struct page *from);
#define __HAVE_ARCH_COPY_HIGHPAGE
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO | __GFP_CMA, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)


@@ -35,9 +35,7 @@ static __must_check inline bool may_use_simd(void)
* migrated, and if it's clear we cannot be migrated to a CPU
* where it is set.
*/
return !WARN_ON(!system_capabilities_finalized()) &&
system_supports_fpsimd() &&
!in_irq() && !irqs_disabled() && !in_nmi() &&
return !in_irq() && !irqs_disabled() && !in_nmi() &&
!this_cpu_read(fpsimd_context_busy);
}


@@ -153,6 +153,7 @@ SYM_CODE_START_LOCAL(enter_vhe)
// Invalidate TLBs before enabling the MMU
tlbi vmalle1
dsb nsh
isb
// Enable the EL2 S1 MMU, as set up from EL1
mrs_s x0, SYS_SCTLR_EL12


@@ -7,6 +7,7 @@
#include <linux/ftrace.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/sort.h>
static struct plt_entry __get_adrp_add_pair(u64 dst, u64 pc,
@@ -290,6 +291,7 @@ static int partition_branch_plt_relas(Elf64_Sym *syms, Elf64_Rela *rela,
int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
char *secstrings, struct module *mod)
{
bool copy_rela_for_fips140 = false;
unsigned long core_plts = 0;
unsigned long init_plts = 0;
Elf64_Sym *syms = NULL;
@@ -321,6 +323,10 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
return -ENOEXEC;
}
if (IS_ENABLED(CONFIG_CRYPTO_FIPS140) &&
!strcmp(mod->name, "fips140"))
copy_rela_for_fips140 = true;
for (i = 0; i < ehdr->e_shnum; i++) {
Elf64_Rela *rels = (void *)ehdr + sechdrs[i].sh_offset;
int nents, numrels = sechdrs[i].sh_size / sizeof(Elf64_Rela);
@@ -329,10 +335,38 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
if (sechdrs[i].sh_type != SHT_RELA)
continue;
#ifdef CONFIG_CRYPTO_FIPS140
if (copy_rela_for_fips140 &&
!strcmp(secstrings + dstsec->sh_name, ".rodata")) {
void *p = kmemdup(rels, numrels * sizeof(Elf64_Rela),
GFP_KERNEL);
if (!p) {
pr_err("fips140: failed to allocate .rodata RELA buffer\n");
return -ENOMEM;
}
mod->arch.rodata_relocations = p;
mod->arch.num_rodata_relocations = numrels;
}
#endif
/* ignore relocations that operate on non-exec sections */
if (!(dstsec->sh_flags & SHF_EXECINSTR))
continue;
#ifdef CONFIG_CRYPTO_FIPS140
if (copy_rela_for_fips140 &&
!strcmp(secstrings + dstsec->sh_name, ".text")) {
void *p = kmemdup(rels, numrels * sizeof(Elf64_Rela),
GFP_KERNEL);
if (!p) {
pr_err("fips140: failed to allocate .text RELA buffer\n");
return -ENOMEM;
}
mod->arch.text_relocations = p;
mod->arch.num_text_relocations = numrels;
}
#endif
/*
* sort branch relocations requiring a PLT by type, symbol index
* and addend


@@ -24,6 +24,9 @@
#include <linux/sched_clock.h>
#include <linux/smp.h>
#define ERRATUM_1974925_MASK (BIT(12) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | BIT(24) | BIT(25) \
| BIT(26) | BIT(27))
/* ARMv8 Cortex-A53 specific event types. */
#define ARMV8_A53_PERFCTR_PREF_LINEFILL 0xC2
@@ -1014,6 +1017,13 @@ struct armv8pmu_probe_info {
bool present;
};
#ifdef CONFIG_ARM64_ERRATUM_1974925
static void armv8pmu_overwrite_pmceid0_el0(u64 *val)
{
(*val) |= (ERRATUM_1974925_MASK | (ERRATUM_1974925_MASK << 32));
}
#endif
static void __armv8pmu_probe_pmu(void *info)
{
struct armv8pmu_probe_info *probe = info;
@@ -1039,7 +1049,11 @@ static void __armv8pmu_probe_pmu(void *info)
/* Add the CPU cycles counter */
cpu_pmu->num_events += 1;
pmceid[0] = pmceid_raw[0] = read_sysreg(pmceid0_el0);
pmceid_raw[0] = read_sysreg(pmceid0_el0);
#ifdef CONFIG_ARM64_ERRATUM_1974925
armv8pmu_overwrite_pmceid0_el0(&pmceid_raw[0]);
#endif
pmceid[0] = pmceid_raw[0];
pmceid[1] = pmceid_raw[1] = read_sysreg(pmceid1_el0);
bitmap_from_arr32(cpu_pmu->pmceid_bitmap,


@@ -1888,8 +1888,10 @@ static int init_hyp_mode(void)
if (is_protected_kvm_enabled()) {
init_cpu_logical_map();
if (!init_psci_relay())
if (!init_psci_relay()) {
err = -ENODEV;
goto out_err;
}
}
if (is_protected_kvm_enabled()) {


@@ -4,7 +4,7 @@
#
asflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
hostprogs := gen-hyprel
HOST_EXTRACFLAGS += -I$(objtree)/include


@@ -50,6 +50,18 @@
#ifndef R_AARCH64_ABS64
#define R_AARCH64_ABS64 257
#endif
#ifndef R_AARCH64_PREL64
#define R_AARCH64_PREL64 260
#endif
#ifndef R_AARCH64_PREL32
#define R_AARCH64_PREL32 261
#endif
#ifndef R_AARCH64_PREL16
#define R_AARCH64_PREL16 262
#endif
#ifndef R_AARCH64_PLT32
#define R_AARCH64_PLT32 314
#endif
#ifndef R_AARCH64_LD_PREL_LO19
#define R_AARCH64_LD_PREL_LO19 273
#endif
@@ -371,6 +383,12 @@ static void emit_rela_section(Elf64_Shdr *sh_rela)
case R_AARCH64_ABS64:
emit_rela_abs64(rela, sh_orig_name);
break;
/* Allow position-relative data relocations. */
case R_AARCH64_PREL64:
case R_AARCH64_PREL32:
case R_AARCH64_PREL16:
case R_AARCH64_PLT32:
break;
/* Allow relocations to generate PC-relative addressing. */
case R_AARCH64_LD_PREL_LO19:
case R_AARCH64_ADR_PREL_LO21:


@@ -23,8 +23,8 @@
extern unsigned long hyp_nr_cpus;
struct host_kvm host_kvm;
struct hyp_pool host_s2_mem;
struct hyp_pool host_s2_dev;
static struct hyp_pool host_s2_mem;
static struct hyp_pool host_s2_dev;
/*
* Copies of the host's CPU features registers holding sanitized values.


@@ -166,6 +166,25 @@ static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu)
return 0;
}
static bool vcpu_allowed_register_width(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu *tmp;
bool is32bit;
int i;
is32bit = vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT);
if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1) && is32bit)
return false;
/* Check that the vcpus are either all 32bit or all 64bit */
kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
if (vcpu_has_feature(tmp, KVM_ARM_VCPU_EL1_32BIT) != is32bit)
return false;
}
return true;
}
/**
* kvm_reset_vcpu - sets core registers and sys_regs to reset value
* @vcpu: The VCPU pointer
@@ -217,13 +236,14 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
}
}
if (!vcpu_allowed_register_width(vcpu)) {
ret = -EINVAL;
goto out;
}
switch (vcpu->arch.target) {
default:
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1)) {
ret = -EINVAL;
goto out;
}
pstate = VCPU_RESET_PSTATE_SVC;
} else {
pstate = VCPU_RESET_PSTATE_EL1;


@@ -399,14 +399,14 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *rd)
{
u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
if (p->is_write)
reg_to_dbg(vcpu, p, rd, dbg_reg);
else
dbg_to_reg(vcpu, p, rd, dbg_reg);
trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg);
trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
return true;
}
@@ -414,7 +414,7 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
return -EFAULT;
@@ -424,7 +424,7 @@ static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg];
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
return -EFAULT;
@@ -434,21 +434,21 @@ static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static void reset_bvr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
vcpu->arch.vcpu_debug_state.dbg_bvr[rd->reg] = rd->val;
vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm] = rd->val;
}
static bool trap_bcr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *rd)
{
u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
if (p->is_write)
reg_to_dbg(vcpu, p, rd, dbg_reg);
else
dbg_to_reg(vcpu, p, rd, dbg_reg);
trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg);
trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
return true;
}
@@ -456,7 +456,7 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
return -EFAULT;
@@ -467,7 +467,7 @@ static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg];
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
return -EFAULT;
@@ -477,22 +477,22 @@ static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static void reset_bcr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
vcpu->arch.vcpu_debug_state.dbg_bcr[rd->reg] = rd->val;
vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm] = rd->val;
}
static bool trap_wvr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *rd)
{
u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm];
if (p->is_write)
reg_to_dbg(vcpu, p, rd, dbg_reg);
else
dbg_to_reg(vcpu, p, rd, dbg_reg);
trace_trap_reg(__func__, rd->reg, p->is_write,
vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg]);
trace_trap_reg(__func__, rd->CRm, p->is_write,
vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm]);
return true;
}
@@ -500,7 +500,7 @@ static bool trap_wvr(struct kvm_vcpu *vcpu,
static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm];
if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
return -EFAULT;
@@ -510,7 +510,7 @@ static int set_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg];
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm];
if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
return -EFAULT;
@@ -520,21 +520,21 @@ static int get_wvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static void reset_wvr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
vcpu->arch.vcpu_debug_state.dbg_wvr[rd->reg] = rd->val;
vcpu->arch.vcpu_debug_state.dbg_wvr[rd->CRm] = rd->val;
}
static bool trap_wcr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *rd)
{
u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
if (p->is_write)
reg_to_dbg(vcpu, p, rd, dbg_reg);
else
dbg_to_reg(vcpu, p, rd, dbg_reg);
trace_trap_reg(__func__, rd->reg, p->is_write, *dbg_reg);
trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
return true;
}
@@ -542,7 +542,7 @@ static bool trap_wcr(struct kvm_vcpu *vcpu,
static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
return -EFAULT;
@@ -552,7 +552,7 @@ static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
const struct kvm_one_reg *reg, void __user *uaddr)
{
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg];
__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
return -EFAULT;
@@ -562,7 +562,7 @@ static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
static void reset_wcr(struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
vcpu->arch.vcpu_debug_state.dbg_wcr[rd->reg] = rd->val;
vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm] = rd->val;
}
static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)


@@ -477,7 +477,8 @@ static void __init map_mem(pgd_t *pgdp)
int flags = 0;
u64 i;
if (rodata_full || debug_pagealloc_enabled())
if (rodata_full || debug_pagealloc_enabled() ||
IS_ENABLED(CONFIG_KFENCE))
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*


@@ -46,7 +46,7 @@
#endif
#ifdef CONFIG_KASAN_HW_TAGS
#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1
#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1
#else
#define TCR_KASAN_HW_FLAGS 0
#endif


@@ -82,16 +82,16 @@ do { \
} while (0)
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
({ \
struct page *page = alloc_page_vma( \
GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr); \
GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr); \
if (page) \
flush_dcache_page(page); \
page; \
})
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)


@@ -13,9 +13,9 @@ extern unsigned long memory_end;
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
#define __pa(vaddr) ((unsigned long)(vaddr))
#define __va(paddr) ((void *)((unsigned long)(paddr)))


@@ -18,6 +18,7 @@
#include <asm/reboot.h>
#include <asm/setup.h>
#include <asm/mach-au1x00/au1000.h>
#include <asm/mach-au1x00/gpio-au1000.h>
#include <prom.h>
const char *get_system_type(void)


@@ -8,6 +8,7 @@
#include <linux/io.h>
#include <linux/clk.h>
#include <linux/export.h>
#include <linux/init.h>
#include <linux/sizes.h>
#include <linux/of_fdt.h>
@@ -25,6 +26,7 @@
__iomem void *rt_sysc_membase;
__iomem void *rt_memc_membase;
EXPORT_SYMBOL_GPL(rt_sysc_membase);
__iomem void *plat_of_remap_node(const char *node)
{


@@ -0,0 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_BARRIER_H
#define __ASM_BARRIER_H
#define mb() asm volatile ("l.msync" ::: "memory")
#include <asm-generic/barrier.h>
#endif /* __ASM_BARRIER_H */


@@ -61,6 +61,7 @@ config PARISC
select HAVE_KRETPROBES
select HAVE_DYNAMIC_FTRACE if $(cc-option,-fpatchable-function-entry=1,1)
select HAVE_FTRACE_MCOUNT_RECORD if HAVE_DYNAMIC_FTRACE
select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY if DYNAMIC_FTRACE
select HAVE_KPROBES_ON_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select SET_FS


@@ -21,8 +21,6 @@ typedef struct {
unsigned long sig[_NSIG_WORDS];
} sigset_t;
#define __ARCH_UAPI_SA_FLAGS _SA_SIGGFAULT
#include <asm/sigcontext.h>
#endif /* !__ASSEMBLY */


@@ -108,7 +108,6 @@ int arch_prepare_kprobe(struct kprobe *p)
int ret = 0;
struct kprobe *prev;
struct ppc_inst insn = ppc_inst_read((struct ppc_inst *)p->addr);
struct ppc_inst prefix = ppc_inst_read((struct ppc_inst *)(p->addr - 1));
if ((unsigned long)p->addr & 0x03) {
printk("Attempt to register kprobe at an unaligned address\n");
@@ -116,7 +115,8 @@ int arch_prepare_kprobe(struct kprobe *p)
} else if (IS_MTMSRD(insn) || IS_RFID(insn) || IS_RFI(insn)) {
printk("Cannot register a kprobe on rfi/rfid or mtmsr[d]\n");
ret = -EINVAL;
} else if (ppc_inst_prefixed(prefix)) {
} else if ((unsigned long)p->addr & ~PAGE_MASK &&
ppc_inst_prefixed(ppc_inst_read((struct ppc_inst *)(p->addr - 1)))) {
printk("Cannot register a kprobe on the second word of prefixed instruction\n");
ret = -EINVAL;
}

@@ -23,7 +23,7 @@ ifneq ($(c-gettimeofday-y),)
endif
# Build rules
targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-dummy.o
targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-syms.S
obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
obj-y += vdso.o vdso-syms.o
@@ -41,7 +41,7 @@ KASAN_SANITIZE := n
$(obj)/vdso.o: $(obj)/vdso.so
# link rule for the .so file, .lds has to be first
$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
$(obj)/vdso.so.dbg: $(obj)/vdso.lds $(obj-vdso) FORCE
$(call if_changed,vdsold)
LDFLAGS_vdso.so.dbg = -shared -s -soname=linux-vdso.so.1 \
--build-id=sha1 --hash-style=both --eh-frame-hdr

@@ -68,9 +68,9 @@ static inline void copy_page(void *to, void *from)
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
/*
* These are used to make use of C type-checking..

@@ -177,11 +177,6 @@ ifeq ($(ACCUMULATE_OUTGOING_ARGS), 1)
KBUILD_CFLAGS += $(call cc-option,-maccumulate-outgoing-args,)
endif
ifdef CONFIG_LTO_CLANG
KBUILD_LDFLAGS += -plugin-opt=-code-model=kernel \
-plugin-opt=-stack-alignment=$(if $(CONFIG_X86_32),4,8)
endif
# Workaround for a gcc prerelease that unfortunately was shipped in a suse release
KBUILD_CFLAGS += -Wno-sign-compare
#
@@ -201,7 +196,12 @@ ifdef CONFIG_RETPOLINE
endif
endif
KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE)
KBUILD_LDFLAGS += -m elf_$(UTS_MACHINE)
ifdef CONFIG_LTO_CLANG
KBUILD_LDFLAGS += -plugin-opt=-code-model=kernel \
-plugin-opt=-stack-alignment=$(if $(CONFIG_X86_32),4,8)
endif
ifdef CONFIG_X86_NEED_RELOCS
LDFLAGS_vmlinux := --emit-relocs --discard-none

@@ -204,20 +204,33 @@ CONFIG_MAC802154=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_SCH_MULTIQ=y
CONFIG_NET_SCH_SFQ=y
CONFIG_NET_SCH_TBF=y
CONFIG_NET_SCH_NETEM=y
CONFIG_NET_SCH_CODEL=y
CONFIG_NET_SCH_FQ_CODEL=y
CONFIG_NET_SCH_FQ=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_BASIC=y
CONFIG_NET_CLS_TCINDEX=y
CONFIG_NET_CLS_FW=y
CONFIG_NET_CLS_U32=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_FLOW=y
CONFIG_NET_CLS_BPF=y
CONFIG_NET_CLS_MATCHALL=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_CMP=y
CONFIG_NET_EMATCH_NBYTE=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_EMATCH_META=y
CONFIG_NET_EMATCH_TEXT=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_NET_ACT_SKBEDIT=y
CONFIG_VSOCKETS=y
CONFIG_BPF_JIT=y
CONFIG_BT=y
@@ -237,6 +250,7 @@ CONFIG_MAC80211=y
CONFIG_RFKILL=y
CONFIG_PCI=y
CONFIG_PCIEPORTBUS=y
CONFIG_PCIEAER=y
CONFIG_PCI_MSI=y
CONFIG_PCIE_DW_PLAT_EP=y
CONFIG_PCI_ENDPOINT=y
@@ -274,17 +288,18 @@ CONFIG_WIREGUARD=y
CONFIG_IFB=y
CONFIG_TUN=y
CONFIG_VETH=y
CONFIG_PHYLIB=y
CONFIG_PPP=y
CONFIG_PPP_BSDCOMP=y
CONFIG_PPP_DEFLATE=y
CONFIG_PPP_MPPE=y
CONFIG_PPTP=y
CONFIG_PPPOL2TP=y
CONFIG_USB_RTL8150=y
CONFIG_USB_RTL8152=y
CONFIG_USB_USBNET=y
# CONFIG_USB_NET_AX8817X is not set
# CONFIG_USB_NET_AX88179_178A is not set
CONFIG_USB_NET_CDC_EEM=y
# CONFIG_USB_NET_NET1080 is not set
# CONFIG_USB_NET_CDC_SUBSET is not set
# CONFIG_USB_NET_ZAURUS is not set
@@ -419,7 +434,9 @@ CONFIG_USB_CONFIGFS_UEVENT=y
CONFIG_USB_CONFIGFS_SERIAL=y
CONFIG_USB_CONFIGFS_ACM=y
CONFIG_USB_CONFIGFS_NCM=y
CONFIG_USB_CONFIGFS_ECM=y
CONFIG_USB_CONFIGFS_RNDIS=y
CONFIG_USB_CONFIGFS_EEM=y
CONFIG_USB_CONFIGFS_MASS_STORAGE=y
CONFIG_USB_CONFIGFS_F_FS=y
CONFIG_USB_CONFIGFS_F_ACC=y
@@ -459,7 +476,6 @@ CONFIG_IIO_BUFFER=y
CONFIG_IIO_TRIGGER=y
CONFIG_POWERCAP=y
CONFIG_DTPM=y
CONFIG_RAS=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ANDROID_BINDERFS=y
@@ -495,6 +511,7 @@ CONFIG_PSTORE=y
CONFIG_PSTORE_CONSOLE=y
CONFIG_PSTORE_PMSG=y
CONFIG_PSTORE_RAM=y
CONFIG_EROFS_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=y
CONFIG_NLS_CODEPAGE_775=y

@@ -174,6 +174,7 @@ static inline int apic_is_clustered_box(void)
extern int setup_APIC_eilvt(u8 lvt_off, u8 vector, u8 msg_type, u8 mask);
extern void lapic_assign_system_vectors(void);
extern void lapic_assign_legacy_vector(unsigned int isairq, bool replace);
extern void lapic_update_legacy_vectors(void);
extern void lapic_online(void);
extern void lapic_offline(void);
extern bool apic_needs_pit(void);

@@ -56,11 +56,8 @@
# define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31))
#endif
#ifdef CONFIG_IOMMU_SUPPORT
# define DISABLE_ENQCMD 0
#else
# define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31))
#endif
/* Force disable because it's broken beyond repair */
#define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31))
/*
* Make sure to add features to the correct mask

@@ -79,10 +79,6 @@ extern int cpu_has_xfeatures(u64 xfeatures_mask, const char **feature_name);
*/
#define PASID_DISABLED 0
#ifdef CONFIG_IOMMU_SUPPORT
/* Update current's PASID MSR/state by mm's PASID. */
void update_pasid(void);
#else
static inline void update_pasid(void) { }
#endif
#endif /* _ASM_X86_FPU_API_H */

@@ -584,13 +584,6 @@ static inline void switch_fpu_finish(struct fpu *new_fpu)
pkru_val = pk->pkru;
}
__write_pkru(pkru_val);
/*
* Expensive PASID MSR write will be avoided in update_pasid() because
* TIF_NEED_FPU_LOAD was set. And the PASID state won't be updated
* unless it's different from mm->pasid to reduce overhead.
*/
update_pasid();
}
#endif /* _ASM_X86_FPU_INTERNAL_H */

@@ -7,8 +7,6 @@
#include <linux/interrupt.h>
#include <uapi/asm/kvm_para.h>
extern void kvmclock_init(void);
#ifdef CONFIG_KVM_GUEST
bool kvm_check_and_clear_guest_paused(void);
#else
@@ -86,13 +84,14 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
}
#ifdef CONFIG_KVM_GUEST
void kvmclock_init(void);
void kvmclock_disable(void);
bool kvm_para_available(void);
unsigned int kvm_arch_para_features(void);
unsigned int kvm_arch_para_hints(void);
void kvm_async_pf_task_wait_schedule(u32 token);
void kvm_async_pf_task_wake(u32 token);
u32 kvm_read_and_reset_apf_flags(void);
void kvm_disable_steal_time(void);
bool __kvm_handle_async_pf(struct pt_regs *regs, u32 token);
DECLARE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
@@ -137,11 +136,6 @@ static inline u32 kvm_read_and_reset_apf_flags(void)
return 0;
}
static inline void kvm_disable_steal_time(void)
{
return;
}
static __always_inline bool kvm_handle_async_pf(struct pt_regs *regs, u32 token)
{
return false;

@@ -34,9 +34,9 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
copy_page(to, from);
}
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO | __GFP_CMA, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
#ifndef __pa
#define __pa(x) __phys_addr((unsigned long)(x))

@@ -2539,6 +2539,7 @@ static void __init apic_bsp_setup(bool upmode)
end_local_APIC_setup();
irq_remap_enable_fault_handling();
setup_IO_APIC();
lapic_update_legacy_vectors();
}
#ifdef CONFIG_UP_LATE_INIT

@@ -687,6 +687,26 @@ void lapic_assign_legacy_vector(unsigned int irq, bool replace)
irq_matrix_assign_system(vector_matrix, ISA_IRQ_VECTOR(irq), replace);
}
void __init lapic_update_legacy_vectors(void)
{
unsigned int i;
if (IS_ENABLED(CONFIG_X86_IO_APIC) && nr_ioapics > 0)
return;
/*
* If the IO/APIC is disabled via config, kernel command line or
* lack of enumeration then all legacy interrupts are routed
* through the PIC. Make sure that they are marked as legacy
* vectors. PIC_CASCADE_IRQ has already been marked in
* lapic_assign_system_vectors().
*/
for (i = 0; i < nr_legacy_irqs(); i++) {
if (i != PIC_CASCADE_IR)
lapic_assign_legacy_vector(i, true);
}
}
void __init lapic_assign_system_vectors(void)
{
unsigned int i, vector = 0;

@@ -1402,60 +1402,3 @@ int proc_pid_arch_status(struct seq_file *m, struct pid_namespace *ns,
return 0;
}
#endif /* CONFIG_PROC_PID_ARCH_STATUS */
#ifdef CONFIG_IOMMU_SUPPORT
void update_pasid(void)
{
u64 pasid_state;
u32 pasid;
if (!cpu_feature_enabled(X86_FEATURE_ENQCMD))
return;
if (!current->mm)
return;
pasid = READ_ONCE(current->mm->pasid);
/* Set the valid bit in the PASID MSR/state only for valid pasid. */
pasid_state = pasid == PASID_DISABLED ?
pasid : pasid | MSR_IA32_PASID_VALID;
/*
* No need to hold fregs_lock() since the task's fpstate won't
* be changed by others (e.g. ptrace) while the task is being
* switched to or is in IPI.
*/
if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
/* The MSR is active and can be directly updated. */
wrmsrl(MSR_IA32_PASID, pasid_state);
} else {
struct fpu *fpu = &current->thread.fpu;
struct ia32_pasid_state *ppasid_state;
struct xregs_state *xsave;
/*
* The CPU's xstate registers are not currently active. Just
* update the PASID state in the memory buffer here. The
* PASID MSR will be loaded when returning to user mode.
*/
xsave = &fpu->state.xsave;
xsave->header.xfeatures |= XFEATURE_MASK_PASID;
ppasid_state = get_xsave_addr(xsave, XFEATURE_PASID);
/*
* Since XFEATURE_MASK_PASID is set in xfeatures, ppasid_state
* won't be NULL and no need to check its value.
*
* Only update the task's PASID state when it's different
* from the mm's pasid.
*/
if (ppasid_state->pasid != pasid_state) {
/*
* Invalid fpregs so that state restoring will pick up
* the PASID state.
*/
__fpu_invalidate_fpregs_state(fpu);
ppasid_state->pasid = pasid_state;
}
}
}
#endif /* CONFIG_IOMMU_SUPPORT */

@@ -26,6 +26,7 @@
#include <linux/kprobes.h>
#include <linux/nmi.h>
#include <linux/swait.h>
#include <linux/syscore_ops.h>
#include <asm/timer.h>
#include <asm/cpu.h>
#include <asm/traps.h>
@@ -37,6 +38,7 @@
#include <asm/tlb.h>
#include <asm/cpuidle_haltpoll.h>
#include <asm/ptrace.h>
#include <asm/reboot.h>
#include <asm/svm.h>
DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
@@ -374,6 +376,14 @@ static void kvm_pv_disable_apf(void)
pr_info("Unregister pv shared memory for cpu %d\n", smp_processor_id());
}
static void kvm_disable_steal_time(void)
{
if (!has_steal_clock)
return;
wrmsr(MSR_KVM_STEAL_TIME, 0, 0);
}
static void kvm_pv_guest_cpu_reboot(void *unused)
{
/*
@@ -416,14 +426,6 @@ static u64 kvm_steal_clock(int cpu)
return steal;
}
void kvm_disable_steal_time(void)
{
if (!has_steal_clock)
return;
wrmsr(MSR_KVM_STEAL_TIME, 0, 0);
}
static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
{
early_set_memory_decrypted((unsigned long) ptr, size);
@@ -460,6 +462,27 @@ static bool pv_tlb_flush_supported(void)
static DEFINE_PER_CPU(cpumask_var_t, __pv_cpu_mask);
static void kvm_guest_cpu_offline(bool shutdown)
{
kvm_disable_steal_time();
if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
wrmsrl(MSR_KVM_PV_EOI_EN, 0);
kvm_pv_disable_apf();
if (!shutdown)
apf_task_wake_all();
kvmclock_disable();
}
static int kvm_cpu_online(unsigned int cpu)
{
unsigned long flags;
local_irq_save(flags);
kvm_guest_cpu_init();
local_irq_restore(flags);
return 0;
}
#ifdef CONFIG_SMP
static bool pv_ipi_supported(void)
@@ -587,30 +610,47 @@ static void __init kvm_smp_prepare_boot_cpu(void)
kvm_spinlock_init();
}
static void kvm_guest_cpu_offline(void)
{
kvm_disable_steal_time();
if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
wrmsrl(MSR_KVM_PV_EOI_EN, 0);
kvm_pv_disable_apf();
apf_task_wake_all();
}
static int kvm_cpu_online(unsigned int cpu)
{
local_irq_disable();
kvm_guest_cpu_init();
local_irq_enable();
return 0;
}
static int kvm_cpu_down_prepare(unsigned int cpu)
{
local_irq_disable();
kvm_guest_cpu_offline();
local_irq_enable();
unsigned long flags;
local_irq_save(flags);
kvm_guest_cpu_offline(false);
local_irq_restore(flags);
return 0;
}
#endif
static int kvm_suspend(void)
{
kvm_guest_cpu_offline(false);
return 0;
}
static void kvm_resume(void)
{
kvm_cpu_online(raw_smp_processor_id());
}
static struct syscore_ops kvm_syscore_ops = {
.suspend = kvm_suspend,
.resume = kvm_resume,
};
/*
* After a PV feature is registered, the host will keep writing to the
* registered memory location. If the guest happens to shutdown, this memory
* won't be valid. In cases like kexec, in which you install a new kernel, this
* means a random memory location will be kept being written.
*/
#ifdef CONFIG_KEXEC_CORE
static void kvm_crash_shutdown(struct pt_regs *regs)
{
kvm_guest_cpu_offline(true);
native_machine_crash_shutdown(regs);
}
#endif
static void kvm_flush_tlb_others(const struct cpumask *cpumask,
@@ -681,6 +721,12 @@ static void __init kvm_guest_init(void)
kvm_guest_cpu_init();
#endif
#ifdef CONFIG_KEXEC_CORE
machine_ops.crash_shutdown = kvm_crash_shutdown;
#endif
register_syscore_ops(&kvm_syscore_ops);
/*
* Hard lockup detection is enabled by default. Disable it, as guests
* can get false positives too easily, for example if the host is

@@ -20,7 +20,6 @@
#include <asm/hypervisor.h>
#include <asm/mem_encrypt.h>
#include <asm/x86_init.h>
#include <asm/reboot.h>
#include <asm/kvmclock.h>
static int kvmclock __initdata = 1;
@@ -204,28 +203,9 @@ static void kvm_setup_secondary_clock(void)
}
#endif
/*
* After the clock is registered, the host will keep writing to the
* registered memory location. If the guest happens to shutdown, this memory
* won't be valid. In cases like kexec, in which you install a new kernel, this
* means a random memory location will be kept being written. So before any
* kind of shutdown from our side, we unregister the clock by writing anything
* that does not have the 'enable' bit set in the msr
*/
#ifdef CONFIG_KEXEC_CORE
static void kvm_crash_shutdown(struct pt_regs *regs)
void kvmclock_disable(void)
{
native_write_msr(msr_kvm_system_time, 0, 0);
kvm_disable_steal_time();
native_machine_crash_shutdown(regs);
}
#endif
static void kvm_shutdown(void)
{
native_write_msr(msr_kvm_system_time, 0, 0);
kvm_disable_steal_time();
native_machine_shutdown();
}
static void __init kvmclock_init_mem(void)
@@ -352,10 +332,6 @@ void __init kvmclock_init(void)
#endif
x86_platform.save_sched_clock_state = kvm_save_sched_clock_state;
x86_platform.restore_sched_clock_state = kvm_restore_sched_clock_state;
machine_ops.shutdown = kvm_shutdown;
#ifdef CONFIG_KEXEC_CORE
machine_ops.crash_shutdown = kvm_crash_shutdown;
#endif
kvm_get_preset_lpj();
/*

@@ -2362,7 +2362,7 @@ static int cr_interception(struct vcpu_svm *svm)
err = 0;
if (cr >= 16) { /* mov to cr */
cr -= 16;
val = kvm_register_read(&svm->vcpu, reg);
val = kvm_register_readl(&svm->vcpu, reg);
trace_kvm_cr_write(cr, val);
switch (cr) {
case 0:
@@ -2408,7 +2408,7 @@ static int cr_interception(struct vcpu_svm *svm)
kvm_queue_exception(&svm->vcpu, UD_VECTOR);
return 1;
}
kvm_register_write(&svm->vcpu, reg, val);
kvm_register_writel(&svm->vcpu, reg, val);
trace_kvm_cr_read(cr, val);
}
return kvm_complete_insn_gp(&svm->vcpu, err);
@@ -2439,13 +2439,13 @@ static int dr_interception(struct vcpu_svm *svm)
if (dr >= 16) { /* mov to DRn */
if (!kvm_require_dr(&svm->vcpu, dr - 16))
return 1;
val = kvm_register_read(&svm->vcpu, reg);
val = kvm_register_readl(&svm->vcpu, reg);
kvm_set_dr(&svm->vcpu, dr - 16, val);
} else {
if (!kvm_require_dr(&svm->vcpu, dr))
return 1;
kvm_get_dr(&svm->vcpu, dr, &val);
kvm_register_write(&svm->vcpu, reg, val);
kvm_register_writel(&svm->vcpu, reg, val);
}
return kvm_skip_emulated_instruction(&svm->vcpu);

@@ -3006,6 +3006,8 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
st->preempted & KVM_VCPU_FLUSH_TLB);
if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
kvm_vcpu_flush_tlb_guest(vcpu);
} else {
st->preempted = 0;
}
vcpu->arch.st.preempted = 0;

@@ -504,10 +504,6 @@ void __init sme_enable(struct boot_params *bp)
#define AMD_SME_BIT BIT(0)
#define AMD_SEV_BIT BIT(1)
/* Check the SEV MSR whether SEV or SME is enabled */
sev_status = __rdmsr(MSR_AMD64_SEV);
feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
/*
* Check for the SME/SEV feature:
* CPUID Fn8000_001F[EAX]
@@ -519,11 +515,16 @@
eax = 0x8000001f;
ecx = 0;
native_cpuid(&eax, &ebx, &ecx, &edx);
if (!(eax & feature_mask))
/* Check whether SEV or SME is supported */
if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT)))
return;
me_mask = 1UL << (ebx & 0x3f);
/* Check the SEV MSR whether SEV or SME is enabled */
sev_status = __rdmsr(MSR_AMD64_SEV);
feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
/* Check if memory encryption is enabled */
if (feature_mask == AMD_SME_BIT) {
/*

@@ -2210,10 +2210,9 @@ static void bfq_remove_request(struct request_queue *q,
}
static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
static bool bfq_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
struct request_queue *q = hctx->queue;
struct bfq_data *bfqd = q->elevator->elevator_data;
struct request *free = NULL;
/*

@@ -348,14 +348,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
struct elevator_queue *e = q->elevator;
struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
struct blk_mq_ctx *ctx;
struct blk_mq_hw_ctx *hctx;
bool ret = false;
enum hctx_type type;
if (e && e->type->ops.bio_merge)
return e->type->ops.bio_merge(hctx, bio, nr_segs);
return e->type->ops.bio_merge(q, bio, nr_segs);
ctx = blk_mq_get_ctx(q);
hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
type = hctx->type;
if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
list_empty_careful(&ctx->rq_lists[type]))

@@ -562,11 +562,12 @@ static void kyber_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
}
}
static bool kyber_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
static bool kyber_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, bio->bi_opf, ctx);
struct kyber_hctx_data *khd = hctx->sched_data;
struct blk_mq_ctx *ctx = blk_mq_get_ctx(hctx->queue);
struct kyber_ctx_queue *kcq = &khd->kcqs[ctx->index_hw[hctx->type]];
unsigned int sched_domain = kyber_sched_domain(bio->bi_opf);
struct list_head *rq_list = &kcq->rq_list[sched_domain];

@@ -461,10 +461,9 @@ static int dd_request_merge(struct request_queue *q, struct request **rq,
return ELEVATOR_NO_MERGE;
}
static bool dd_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio,
static bool dd_bio_merge(struct request_queue *q, struct bio *bio,
unsigned int nr_segs)
{
struct request_queue *q = hctx->queue;
struct deadline_data *dd = q->elevator->elevator_data;
struct request *free = NULL;
bool ret;

@@ -1,5 +1,5 @@
BRANCH=android12-5.10
KMI_GENERATION=5
KMI_GENERATION=6
LLVM=1
DEPMOD=depmod

@@ -18,6 +18,7 @@ android/abi_gki_aarch64_generic
android/abi_gki_aarch64_exynos
android/abi_gki_aarch64_mtk
android/abi_gki_aarch64_xiaomi
android/abi_gki_aarch64_fips140
"
FILES="${FILES}

@@ -0,0 +1,17 @@
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.gki.aarch64
FILES="${FILES}
crypto/fips140.ko
"
if [ "${LTO}" = "none" ]; then
echo "The FIPS140 module needs LTO to be enabled."
exit 1
fi
MODULES_ORDER=android/gki_aarch64_fips140_modules
DEFCONFIG=fips140_gki_defconfig
KMI_SYMBOL_LIST=android/abi_gki_aarch64_fips140
PRE_DEFCONFIG_CMDS="cat ${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/gki_defconfig ${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/fips140_gki.fragment > ${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/${DEFCONFIG};"
POST_DEFCONFIG_CMDS="rm ${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/${DEFCONFIG}"

@@ -32,6 +32,14 @@ config CRYPTO_FIPS
certification. You should say no unless you know what
this is.
config CRYPTO_FIPS140
def_bool y
depends on MODULES && ARM64 && ARM64_MODULE_PLTS
config CRYPTO_FIPS140_MOD
bool "Enable FIPS140 integrity self-checked loadable module"
depends on LTO_CLANG && CRYPTO_FIPS140
config CRYPTO_ALGAPI
tristate
select CRYPTO_ALGAPI2

@@ -197,3 +197,43 @@ obj-$(CONFIG_ASYMMETRIC_KEY_TYPE) += asymmetric_keys/
obj-$(CONFIG_CRYPTO_HASH_INFO) += hash_info.o
crypto_simd-y := simd.o
obj-$(CONFIG_CRYPTO_SIMD) += crypto_simd.o
ifneq ($(CONFIG_CRYPTO_FIPS140_MOD),)
FIPS140_CFLAGS := -D__DISABLE_EXPORTS -DBUILD_FIPS140_KO
#
# Create a separate FIPS archive containing a duplicate of each builtin generic
# module that is in scope for FIPS 140-2 certification
#
crypto-fips-objs := drbg.o ecb.o cbc.o ctr.o gcm.o xts.o hmac.o memneq.o \
gf128mul.o aes_generic.o lib-crypto-aes.o \
sha1_generic.o sha256_generic.o sha512_generic.o \
lib-sha1.o lib-crypto-sha256.o
crypto-fips-objs := $(foreach o,$(crypto-fips-objs),$(o:.o=-fips.o))
# get the arch to add its objects to $(crypto-fips-objs)
include $(srctree)/arch/$(ARCH)/crypto/Kbuild.fips140
extra-$(CONFIG_CRYPTO_FIPS140_MOD) += crypto-fips.a
$(obj)/%-fips.o: KBUILD_CFLAGS += $(FIPS140_CFLAGS)
$(obj)/%-fips.o: $(src)/%.c FORCE
$(call if_changed_rule,cc_o_c)
$(obj)/lib-%-fips.o: $(srctree)/lib/%.c FORCE
$(call if_changed_rule,cc_o_c)
$(obj)/lib-crypto-%-fips.o: $(srctree)/lib/crypto/%.c FORCE
$(call if_changed_rule,cc_o_c)
$(obj)/crypto-fips.a: $(addprefix $(obj)/,$(crypto-fips-objs)) FORCE
$(call if_changed,ar_and_symver)
fips140-objs := fips140-module.o crypto-fips.a
obj-m += fips140.o
CFLAGS_fips140-module.o += $(FIPS140_CFLAGS)
hostprogs-always-y := fips140_gen_hmac
HOSTLDLIBS_fips140_gen_hmac := -lcrypto -lelf
endif

crypto/fips140-module.c (new file, 630 lines)

@@ -0,0 +1,630 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright 2021 Google LLC
* Author: Ard Biesheuvel <ardb@google.com>
*
* This file is the core of the fips140.ko, which carries a number of crypto
* algorithms and chaining mode templates that are also built into vmlinux.
* This module performs a load-time integrity check, as mandated by FIPS 140,
* and replaces registered crypto algorithms that appear on the FIPS 140 list
* with ones provided by this module. This meets the FIPS 140 requirements for
* a cryptographic software module.
*/
#define pr_fmt(fmt) "fips140: " fmt
#include <linux/ctype.h>
#include <linux/module.h>
#include <crypto/aead.h>
#include <crypto/aes.h>
#include <crypto/hash.h>
#include <crypto/sha.h>
#include <crypto/skcipher.h>
#include <crypto/rng.h>
#include <trace/hooks/fips140.h>
#include "internal.h"
/*
* FIPS 140-2 prefers the use of HMAC with a public key over a plain hash.
*/
u8 __initdata fips140_integ_hmac_key[] = "The quick brown fox jumps over the lazy dog";
/* this is populated by the build tool */
u8 __initdata fips140_integ_hmac_digest[SHA256_DIGEST_SIZE];
const u32 __initcall_start_marker __section(".initcalls._start");
const u32 __initcall_end_marker __section(".initcalls._end");
const u8 __fips140_text_start __section(".text.._start");
const u8 __fips140_text_end __section(".text.._end");
const u8 __fips140_rodata_start __section(".rodata.._start");
const u8 __fips140_rodata_end __section(".rodata.._end");
/*
* We need this little detour to prevent Clang from detecting out of bounds
* accesses to __fips140_text_start and __fips140_rodata_start, which only exist
* to delineate the section, and so their sizes are not relevant to us.
*/
const u32 *__initcall_start = &__initcall_start_marker;
const u8 *__text_start = &__fips140_text_start;
const u8 *__rodata_start = &__fips140_rodata_start;
static const char fips140_algorithms[][22] __initconst = {
"aes",
"gcm(aes)",
"ecb(aes)",
"cbc(aes)",
"ctr(aes)",
"xts(aes)",
"hmac(sha1)",
"hmac(sha224)",
"hmac(sha256)",
"hmac(sha384)",
"hmac(sha512)",
"sha1",
"sha224",
"sha256",
"sha384",
"sha512",
"drbg_nopr_ctr_aes256",
"drbg_nopr_ctr_aes192",
"drbg_nopr_ctr_aes128",
"drbg_nopr_hmac_sha512",
"drbg_nopr_hmac_sha384",
"drbg_nopr_hmac_sha256",
"drbg_nopr_hmac_sha1",
"drbg_nopr_sha512",
"drbg_nopr_sha384",
"drbg_nopr_sha256",
"drbg_nopr_sha1",
"drbg_pr_ctr_aes256",
"drbg_pr_ctr_aes192",
"drbg_pr_ctr_aes128",
"drbg_pr_hmac_sha512",
"drbg_pr_hmac_sha384",
"drbg_pr_hmac_sha256",
"drbg_pr_hmac_sha1",
"drbg_pr_sha512",
"drbg_pr_sha384",
"drbg_pr_sha256",
"drbg_pr_sha1",
};
static bool __init is_fips140_algo(struct crypto_alg *alg)
{
int i;
/*
* All software algorithms are synchronous, hardware algorithms must
* be covered by their own FIPS 140 certification.
*/
if (alg->cra_flags & CRYPTO_ALG_ASYNC)
return false;
for (i = 0; i < ARRAY_SIZE(fips140_algorithms); i++)
if (!strcmp(alg->cra_name, fips140_algorithms[i]))
return true;
return false;
}
static LIST_HEAD(unchecked_fips140_algos);
static void __init unregister_existing_fips140_algos(void)
{
struct crypto_alg *alg, *tmp;
LIST_HEAD(remove_list);
LIST_HEAD(spawns);
down_write(&crypto_alg_sem);
/*
* Find all registered algorithms that we care about, and move them to
* a private list so that they are no longer exposed via the algo
* lookup API. Subsequently, we will unregister them if they are not in
* active use. If they are, we cannot simply remove them but we can
* adapt them later to use our integrity checked backing code.
*/
list_for_each_entry_safe(alg, tmp, &crypto_alg_list, cra_list) {
if (is_fips140_algo(alg)) {
if (refcount_read(&alg->cra_refcnt) == 1) {
/*
* This algorithm is not currently in use, but
* there may be template instances holding
* references to it via spawns. So let's tear
* it down like crypto_unregister_alg() would,
* but without releasing the lock, to prevent
* races with concurrent TFM allocations.
*/
alg->cra_flags |= CRYPTO_ALG_DEAD;
list_move(&alg->cra_list, &remove_list);
crypto_remove_spawns(alg, &spawns, NULL);
} else {
/*
* This algorithm is live, i.e., there are TFMs
* allocated that rely on it for its crypto
* transformations. We will swap these out
* later with integrity checked versions.
*/
list_move(&alg->cra_list,
&unchecked_fips140_algos);
}
}
}
/*
* We haven't taken a reference to the algorithms on the remove_list,
* so technically, we may be competing with a concurrent invocation of
* crypto_unregister_alg() here. Fortunately, crypto_unregister_alg()
* just gives up with a warning if the algo that is being unregistered
* has already disappeared, so this happens to be safe. That does mean
* we need to hold on to the lock, to ensure that the algo is either on
* the list or it is not, and not in some limbo state.
*/
crypto_remove_final(&remove_list);
crypto_remove_final(&spawns);
up_write(&crypto_alg_sem);
}
static void __init unapply_text_relocations(void *section, int section_size,
const Elf64_Rela *rela, int numrels)
{
while (numrels--) {
u32 *place = (u32 *)(section + rela->r_offset);
BUG_ON(rela->r_offset >= section_size);
switch (ELF64_R_TYPE(rela->r_info)) {
#ifdef CONFIG_ARM64
case R_AARCH64_JUMP26:
case R_AARCH64_CALL26:
*place &= ~GENMASK(25, 0);
break;
case R_AARCH64_ADR_PREL_LO21:
case R_AARCH64_ADR_PREL_PG_HI21:
case R_AARCH64_ADR_PREL_PG_HI21_NC:
*place &= ~(GENMASK(30, 29) | GENMASK(23, 5));
break;
case R_AARCH64_ADD_ABS_LO12_NC:
case R_AARCH64_LDST8_ABS_LO12_NC:
case R_AARCH64_LDST16_ABS_LO12_NC:
case R_AARCH64_LDST32_ABS_LO12_NC:
case R_AARCH64_LDST64_ABS_LO12_NC:
case R_AARCH64_LDST128_ABS_LO12_NC:
*place &= ~GENMASK(21, 10);
break;
default:
pr_err("unhandled relocation type %llu\n",
ELF64_R_TYPE(rela->r_info));
BUG();
#else
#error
#endif
}
rela++;
}
}
static void __init unapply_rodata_relocations(void *section, int section_size,
const Elf64_Rela *rela, int numrels)
{
while (numrels--) {
void *place = section + rela->r_offset;
BUG_ON(rela->r_offset >= section_size);
switch (ELF64_R_TYPE(rela->r_info)) {
#ifdef CONFIG_ARM64
case R_AARCH64_ABS64:
*(u64 *)place = 0;
break;
default:
pr_err("unhandled relocation type %llu\n",
ELF64_R_TYPE(rela->r_info));
BUG();
#else
#error
#endif
}
rela++;
}
}
static bool __init check_fips140_module_hmac(void)
{
SHASH_DESC_ON_STACK(desc, dontcare);
u8 digest[SHA256_DIGEST_SIZE];
void *textcopy, *rodatacopy;
int textsize, rodatasize;
int err;
textsize = &__fips140_text_end - &__fips140_text_start;
rodatasize = &__fips140_rodata_end - &__fips140_rodata_start;
pr_warn("text size : 0x%x\n", textsize);
pr_warn("rodata size: 0x%x\n", rodatasize);
textcopy = kmalloc(textsize + rodatasize, GFP_KERNEL);
if (!textcopy) {
pr_err("Failed to allocate memory for copy of .text\n");
return false;
}
rodatacopy = textcopy + textsize;
memcpy(textcopy, __text_start, textsize);
memcpy(rodatacopy, __rodata_start, rodatasize);
// apply the relocations in reverse on the copies of .text and .rodata
unapply_text_relocations(textcopy, textsize,
__this_module.arch.text_relocations,
__this_module.arch.num_text_relocations);
unapply_rodata_relocations(rodatacopy, rodatasize,
__this_module.arch.rodata_relocations,
__this_module.arch.num_rodata_relocations);
kfree(__this_module.arch.text_relocations);
kfree(__this_module.arch.rodata_relocations);
desc->tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
if (IS_ERR(desc->tfm)) {
pr_err("failed to allocate hmac tfm (%ld)\n", PTR_ERR(desc->tfm));
kfree(textcopy);
return false;
}
pr_warn("using '%s' for integrity check\n",
crypto_shash_driver_name(desc->tfm));
err = crypto_shash_setkey(desc->tfm, fips140_integ_hmac_key,
strlen(fips140_integ_hmac_key)) ?:
crypto_shash_init(desc) ?:
crypto_shash_update(desc, textcopy, textsize) ?:
crypto_shash_finup(desc, rodatacopy, rodatasize, digest);
crypto_free_shash(desc->tfm);
kfree(textcopy);
if (err) {
pr_err("failed to calculate hmac shash (%d)\n", err);
return false;
}
if (memcmp(digest, fips140_integ_hmac_digest, sizeof(digest))) {
pr_err("provided_digest : %*phN\n", (int)sizeof(digest),
fips140_integ_hmac_digest);
pr_err("calculated digest: %*phN\n", (int)sizeof(digest),
digest);
return false;
}
return true;
}
static bool __init update_live_fips140_algos(void)
{
struct crypto_alg *alg, *new_alg, *tmp;
/*
* Find all algorithms that we could not unregister the last time
* around, due to the fact that they were already in use.
*/
down_write(&crypto_alg_sem);
list_for_each_entry_safe(alg, tmp, &unchecked_fips140_algos, cra_list) {
/*
* Take this algo off the list before releasing the lock. This
* ensures that a concurrent invocation of
* crypto_unregister_alg() observes a consistent state, i.e.,
* the algo is still on the list, and crypto_unregister_alg()
* will release it, or it is not, and crypto_unregister_alg()
* will issue a warning but ignore this condition otherwise.
*/
list_del_init(&alg->cra_list);
up_write(&crypto_alg_sem);
/*
* Grab the algo that will replace the live one.
* Note that this will instantiate template based instances as
* well, as long as their driver name uses the conventional
* pattern of "template(algo)". In this case, we are relying on
* the fact that the templates carried by this module will
* supersede the builtin ones, due to the fact that they were
* registered later, and therefore appear first in the linked
* list. For example, "hmac(sha1-ce)" constructed using the
* builtin hmac template and the builtin SHA1 driver will be
* superseded by the integrity checked versions of HMAC and
* SHA1-ce carried in this module.
*
* Note that this takes a reference to the new algorithm which
* will never get released. This is intentional: once we copy
* the function pointers from the new algo into the old one, we
* cannot drop the new algo unless we are sure that the old one
* has been released, and this is something we don't keep track
* of at the moment.
*/
new_alg = crypto_alg_mod_lookup(alg->cra_driver_name,
alg->cra_flags & CRYPTO_ALG_TYPE_MASK,
CRYPTO_ALG_TYPE_MASK | CRYPTO_NOLOAD);
if (IS_ERR(new_alg)) {
pr_crit("Failed to allocate '%s' for updating live algo (%ld)\n",
alg->cra_driver_name, PTR_ERR(new_alg));
return false;
}
/*
* The FIPS module's algorithms are expected to be built from
* the same source code as the in-kernel ones so that they are
* fully compatible. In general, there's no way to verify full
* compatibility at runtime, but we can at least verify that
* the algorithm properties match.
*/
if (alg->cra_ctxsize != new_alg->cra_ctxsize ||
alg->cra_alignmask != new_alg->cra_alignmask) {
pr_crit("Failed to update live algo '%s' due to mismatch:\n"
"cra_ctxsize : %u vs %u\n"
"cra_alignmask : 0x%x vs 0x%x\n",
alg->cra_driver_name,
alg->cra_ctxsize, new_alg->cra_ctxsize,
alg->cra_alignmask, new_alg->cra_alignmask);
return false;
}
/*
* Update the name and priority so the algorithm stands out as
		 * one that was updated to comply with FIPS 140, and so that
		 * it is not the preferred version for further use.
*/
strlcat(alg->cra_name, "+orig", CRYPTO_MAX_ALG_NAME);
alg->cra_priority = 0;
switch (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) {
struct aead_alg *old_aead, *new_aead;
struct skcipher_alg *old_skcipher, *new_skcipher;
struct shash_alg *old_shash, *new_shash;
struct rng_alg *old_rng, *new_rng;
case CRYPTO_ALG_TYPE_CIPHER:
alg->cra_u.cipher = new_alg->cra_u.cipher;
break;
case CRYPTO_ALG_TYPE_AEAD:
old_aead = container_of(alg, struct aead_alg, base);
new_aead = container_of(new_alg, struct aead_alg, base);
old_aead->setkey = new_aead->setkey;
old_aead->setauthsize = new_aead->setauthsize;
old_aead->encrypt = new_aead->encrypt;
old_aead->decrypt = new_aead->decrypt;
old_aead->init = new_aead->init;
old_aead->exit = new_aead->exit;
break;
case CRYPTO_ALG_TYPE_SKCIPHER:
old_skcipher = container_of(alg, struct skcipher_alg, base);
new_skcipher = container_of(new_alg, struct skcipher_alg, base);
old_skcipher->setkey = new_skcipher->setkey;
old_skcipher->encrypt = new_skcipher->encrypt;
old_skcipher->decrypt = new_skcipher->decrypt;
old_skcipher->init = new_skcipher->init;
old_skcipher->exit = new_skcipher->exit;
break;
case CRYPTO_ALG_TYPE_SHASH:
old_shash = container_of(alg, struct shash_alg, base);
new_shash = container_of(new_alg, struct shash_alg, base);
old_shash->init = new_shash->init;
old_shash->update = new_shash->update;
old_shash->final = new_shash->final;
old_shash->finup = new_shash->finup;
old_shash->digest = new_shash->digest;
old_shash->export = new_shash->export;
old_shash->import = new_shash->import;
old_shash->setkey = new_shash->setkey;
old_shash->init_tfm = new_shash->init_tfm;
old_shash->exit_tfm = new_shash->exit_tfm;
break;
case CRYPTO_ALG_TYPE_RNG:
old_rng = container_of(alg, struct rng_alg, base);
new_rng = container_of(new_alg, struct rng_alg, base);
old_rng->generate = new_rng->generate;
old_rng->seed = new_rng->seed;
old_rng->set_ent = new_rng->set_ent;
break;
default:
/*
* This should never happen: every item on the
* fips140_algorithms list should match one of the
* cases above, so if we end up here, something is
* definitely wrong.
*/
pr_crit("Unexpected type %u for algo %s, giving up ...\n",
alg->cra_flags & CRYPTO_ALG_TYPE_MASK,
alg->cra_driver_name);
return false;
}
/*
* Move the algorithm back to the algorithm list, so it is
* visible in /proc/crypto et al.
*/
down_write(&crypto_alg_sem);
list_add_tail(&alg->cra_list, &crypto_alg_list);
}
up_write(&crypto_alg_sem);
return true;
}
static void fips140_sha256(void *p, const u8 *data, unsigned int len, u8 *out,
int *hook_inuse)
{
sha256(data, len, out);
*hook_inuse = 1;
}
static void fips140_aes_expandkey(void *p, struct crypto_aes_ctx *ctx,
const u8 *in_key, unsigned int key_len,
int *err)
{
*err = aes_expandkey(ctx, in_key, key_len);
}
static void fips140_aes_encrypt(void *priv, const struct crypto_aes_ctx *ctx,
u8 *out, const u8 *in, int *hook_inuse)
{
aes_encrypt(ctx, out, in);
*hook_inuse = 1;
}
static void fips140_aes_decrypt(void *priv, const struct crypto_aes_ctx *ctx,
u8 *out, const u8 *in, int *hook_inuse)
{
aes_decrypt(ctx, out, in);
*hook_inuse = 1;
}
static bool update_fips140_library_routines(void)
{
int ret;
ret = register_trace_android_vh_sha256(fips140_sha256, NULL) ?:
register_trace_android_vh_aes_expandkey(fips140_aes_expandkey, NULL) ?:
register_trace_android_vh_aes_encrypt(fips140_aes_encrypt, NULL) ?:
register_trace_android_vh_aes_decrypt(fips140_aes_decrypt, NULL);
return ret == 0;
}
/*
* Initialize the FIPS 140 module.
*
* Note: this routine iterates over the contents of the initcall section, which
* consists of an array of function pointers that was emitted by the linker
* rather than the compiler. This means that these function pointers lack the
* usual CFI stubs that the compiler emits when CFI codegen is enabled. So
* let's disable CFI locally when handling the initcall array, to avoid
 * surprises.
*/
int __init __attribute__((__no_sanitize__("cfi"))) fips140_init(void)
{
const u32 *initcall;
pr_info("Loading FIPS 140 module\n");
unregister_existing_fips140_algos();
/* iterate over all init routines present in this module and call them */
for (initcall = __initcall_start + 1;
initcall < &__initcall_end_marker;
initcall++) {
int (*init)(void) = offset_to_ptr(initcall);
init();
}
if (!update_live_fips140_algos())
goto panic;
if (!update_fips140_library_routines())
goto panic;
/*
* Wait until all tasks have at least been scheduled once and preempted
* voluntarily. This ensures that none of the superseded algorithms that
* were already in use will still be live.
*/
synchronize_rcu_tasks();
/* insert self tests here */
/*
* It may seem backward to perform the integrity check last, but this
* is intentional: the check itself uses hmac(sha256) which is one of
* the algorithms that are replaced with versions from this module, and
* the integrity check must use the replacement version.
*/
if (!check_fips140_module_hmac()) {
pr_crit("FIPS 140 integrity check failed -- giving up!\n");
goto panic;
}
pr_info("FIPS 140 integrity check successful\n");
pr_info("FIPS 140 module successfully loaded\n");
return 0;
panic:
panic("FIPS 140 module load failure");
}
module_init(fips140_init);
MODULE_IMPORT_NS(CRYPTO_INTERNAL);
MODULE_LICENSE("GPL v2");
/*
* Crypto-related helper functions, reproduced here so that they will be
* covered by the FIPS 140 integrity check.
*
* Non-cryptographic helper functions such as memcpy() can be excluded from the
* FIPS module, but there is ambiguity about other helper functions like
* __crypto_xor() and crypto_inc() which aren't cryptographic by themselves,
* but are more closely associated with cryptography than e.g. memcpy(). To
* err on the side of caution, we include copies of these in the FIPS module.
*/
void __crypto_xor(u8 *dst, const u8 *src1, const u8 *src2, unsigned int len)
{
while (len >= 8) {
*(u64 *)dst = *(u64 *)src1 ^ *(u64 *)src2;
dst += 8;
src1 += 8;
src2 += 8;
len -= 8;
}
while (len >= 4) {
*(u32 *)dst = *(u32 *)src1 ^ *(u32 *)src2;
dst += 4;
src1 += 4;
src2 += 4;
len -= 4;
}
while (len >= 2) {
*(u16 *)dst = *(u16 *)src1 ^ *(u16 *)src2;
dst += 2;
src1 += 2;
src2 += 2;
len -= 2;
}
while (len--)
*dst++ = *src1++ ^ *src2++;
}
void crypto_inc(u8 *a, unsigned int size)
{
a += size;
while (size--)
if (++*--a)
break;
}
crypto/fips140_gen_hmac.c (new file)
@ -0,0 +1,129 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2021 - Google LLC
* Author: Ard Biesheuvel <ardb@google.com>
*
* This is a host tool that is intended to be used to take the HMAC digest of
* the .text and .rodata sections of the fips140.ko module, and store it inside
* the module. The module will perform an integrity selfcheck at module_init()
* time, by recalculating the digest and comparing it with the value calculated
* here.
*
* Note that the peculiar way an HMAC is being used as a digest with a public
* key rather than as a symmetric key signature is mandated by FIPS 140-2.
*/
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <openssl/hmac.h>
static Elf64_Ehdr *ehdr;
static Elf64_Shdr *shdr;
static int num_shdr;
static const char *strtab;
static Elf64_Sym *syms;
static int num_syms;
static Elf64_Shdr *find_symtab_section(void)
{
int i;
for (i = 0; i < num_shdr; i++)
if (shdr[i].sh_type == SHT_SYMTAB)
return &shdr[i];
return NULL;
}
static void *get_sym_addr(const char *sym_name)
{
int i;
for (i = 0; i < num_syms; i++)
if (!strcmp(strtab + syms[i].st_name, sym_name))
return (void *)ehdr + shdr[syms[i].st_shndx].sh_offset +
syms[i].st_value;
return NULL;
}
static void hmac_section(HMAC_CTX *hmac, const char *start, const char *end)
{
void *start_addr = get_sym_addr(start);
void *end_addr = get_sym_addr(end);
HMAC_Update(hmac, start_addr, end_addr - start_addr);
}
int main(int argc, char **argv)
{
Elf64_Shdr *symtab_shdr;
const char *hmac_key;
unsigned char *dg;
unsigned int dglen;
struct stat stat;
HMAC_CTX *hmac;
int fd, ret;
if (argc < 2) {
fprintf(stderr, "file argument missing\n");
exit(EXIT_FAILURE);
}
fd = open(argv[1], O_RDWR);
if (fd < 0) {
fprintf(stderr, "failed to open %s\n", argv[1]);
exit(EXIT_FAILURE);
}
ret = fstat(fd, &stat);
if (ret < 0) {
fprintf(stderr, "failed to stat() %s\n", argv[1]);
exit(EXIT_FAILURE);
}
ehdr = mmap(0, stat.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (ehdr == MAP_FAILED) {
fprintf(stderr, "failed to mmap() %s\n", argv[1]);
exit(EXIT_FAILURE);
}
shdr = (void *)ehdr + ehdr->e_shoff;
num_shdr = ehdr->e_shnum;
symtab_shdr = find_symtab_section();
syms = (void *)ehdr + symtab_shdr->sh_offset;
num_syms = symtab_shdr->sh_size / sizeof(Elf64_Sym);
strtab = (void *)ehdr + shdr[symtab_shdr->sh_link].sh_offset;
hmac_key = get_sym_addr("fips140_integ_hmac_key");
if (!hmac_key) {
fprintf(stderr, "failed to locate HMAC key in binary\n");
exit(EXIT_FAILURE);
}
dg = get_sym_addr("fips140_integ_hmac_digest");
if (!dg) {
fprintf(stderr, "failed to locate HMAC digest in binary\n");
exit(EXIT_FAILURE);
}
hmac = HMAC_CTX_new();
HMAC_Init_ex(hmac, hmac_key, strlen(hmac_key), EVP_sha256(), NULL);
hmac_section(hmac, "__fips140_text_start", "__fips140_text_end");
hmac_section(hmac, "__fips140_rodata_start", "__fips140_rodata_end");
HMAC_Final(hmac, dg, &dglen);
close(fd);
return 0;
}
@ -20,12 +20,24 @@
static const struct crypto_type crypto_shash_type;
int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen)
static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen)
{
return -ENOSYS;
}
EXPORT_SYMBOL_GPL(shash_no_setkey);
/*
* Check whether an shash algorithm has a setkey function.
*
* For CFI compatibility, this must not be an inline function. This is because
* when CFI is enabled, modules won't get the same address for shash_no_setkey
* (if it were exported, which inlining would require) as the core kernel will.
*/
bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
{
return alg->setkey != shash_no_setkey;
}
EXPORT_SYMBOL_GPL(crypto_shash_alg_has_setkey);
static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen)
@ -226,6 +226,7 @@ static const struct acpi_device_id acpi_apd_device_ids[] = {
{ "AMDI0010", APD_ADDR(wt_i2c_desc) },
{ "AMD0020", APD_ADDR(cz_uart_desc) },
{ "AMDI0020", APD_ADDR(cz_uart_desc) },
{ "AMDI0022", APD_ADDR(cz_uart_desc) },
{ "AMD0030", },
{ "AMD0040", APD_ADDR(fch_misc_desc)},
{ "HYGO0010", APD_ADDR(wt_i2c_desc) },
@ -285,6 +285,14 @@ static void acpi_ut_delete_internal_obj(union acpi_operand_object *object)
}
break;
case ACPI_TYPE_LOCAL_ADDRESS_HANDLER:
ACPI_DEBUG_PRINT((ACPI_DB_ALLOCATIONS,
"***** Address handler %p\n", object));
acpi_os_delete_mutex(object->address_space.context_mutex);
break;
default:
break;
@ -5046,7 +5046,7 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
uint32_t enable;
if (copy_from_user(&enable, ubuf, sizeof(enable))) {
ret = -EINVAL;
ret = -EFAULT;
goto err;
}
binder_inner_proc_lock(proc);
@ -15,6 +15,7 @@
#include "../../mm/slab.h"
#include <linux/memblock.h>
#include <linux/page_owner.h>
#include <linux/swap.h>
struct ads_entry {
char *name;
@ -55,6 +56,9 @@ static const struct ads_entry ads_entries[ADS_END] = {
#ifdef CONFIG_SLUB_DEBUG
ADS_ENTRY(ADS_SLUB_DEBUG, &slub_debug),
#endif
#ifdef CONFIG_SWAP
ADS_ENTRY(ADS_NR_SWAP_PAGES, &nr_swap_pages),
#endif
};
/*
@ -57,6 +57,12 @@
#include <trace/hooks/selinux.h>
#include <trace/hooks/hung_task.h>
#include <trace/hooks/mmc_core.h>
#include <trace/hooks/v4l2core.h>
#include <trace/hooks/v4l2mc.h>
#include <trace/hooks/scmi.h>
#include <trace/hooks/user.h>
#include <trace/hooks/cpuidle_psci.h>
#include <trace/hooks/fips140.h>
/*
* Export tracepoints that act as a bare tracehook (ie: have no trace event
@ -65,6 +71,7 @@
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_select_task_rq_fair);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_select_task_rq_rt);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_select_fallback_rq);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_refrigerator);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_scheduler_tick);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_enqueue_task);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_dequeue_task);
@ -83,6 +90,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_set_priority);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_restore_priority);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_wakeup_ilocked);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_do_send_sig_info);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_process_killed);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rwsem_init);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rwsem_wake);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rwsem_write_finished);
@ -239,6 +247,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_get_unmapped_area_include_reserved_zone)
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_pages_slowpath);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_show_mem);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_print_slabinfo_header);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_do_shrink_slab);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cache_show);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_typec_tcpci_override_toggling);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_typec_tcpci_chk_contaminant);
@ -299,3 +308,21 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_show_stack_hash);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_save_track_hash);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_task_comm);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpufreq_acct_update_power);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_typec_tcpm_log);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_media_device_setup_link);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_clear_reserved_fmt_fields);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_fill_ext_fmtdesc);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_clear_mask_adjust);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_scmi_timeout_sync);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_find_new_ilb);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_force_compatible_pre);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_force_compatible_post);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_uid);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_user);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_balance_anon_file_reclaim);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpuidle_psci_enter);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cpuidle_psci_exit);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_sha256);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_aes_expandkey);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_aes_encrypt);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_aes_decrypt);
@ -191,6 +191,17 @@ int device_links_read_lock_held(void)
{
return srcu_read_lock_held(&device_links_srcu);
}
static void device_link_synchronize_removal(void)
{
synchronize_srcu(&device_links_srcu);
}
static void device_link_remove_from_lists(struct device_link *link)
{
list_del_rcu(&link->s_node);
list_del_rcu(&link->c_node);
}
#else /* !CONFIG_SRCU */
static DECLARE_RWSEM(device_links_lock);
@ -221,6 +232,16 @@ int device_links_read_lock_held(void)
return lockdep_is_held(&device_links_lock);
}
#endif
static inline void device_link_synchronize_removal(void)
{
}
static void device_link_remove_from_lists(struct device_link *link)
{
list_del(&link->s_node);
list_del(&link->c_node);
}
#endif /* !CONFIG_SRCU */
static bool device_is_ancestor(struct device *dev, struct device *target)
@ -442,8 +463,13 @@ static struct attribute *devlink_attrs[] = {
};
ATTRIBUTE_GROUPS(devlink);
static void device_link_free(struct device_link *link)
static void device_link_release_fn(struct work_struct *work)
{
struct device_link *link = container_of(work, struct device_link, rm_work);
/* Ensure that all references to the link object have been dropped. */
device_link_synchronize_removal();
while (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put(link->supplier);
@ -452,24 +478,19 @@ static void device_link_free(struct device_link *link)
kfree(link);
}
#ifdef CONFIG_SRCU
static void __device_link_free_srcu(struct rcu_head *rhead)
{
device_link_free(container_of(rhead, struct device_link, rcu_head));
}
static void devlink_dev_release(struct device *dev)
{
struct device_link *link = to_devlink(dev);
call_srcu(&device_links_srcu, &link->rcu_head, __device_link_free_srcu);
INIT_WORK(&link->rm_work, device_link_release_fn);
/*
* It may take a while to complete this work because of the SRCU
* synchronization in device_link_release_fn() and if the consumer or
* supplier devices get deleted when it runs, so put it into the "long"
* workqueue.
*/
queue_work(system_long_wq, &link->rm_work);
}
#else
static void devlink_dev_release(struct device *dev)
{
device_link_free(to_devlink(dev));
}
#endif
static struct class devlink_class = {
.name = "devlink",
@ -843,7 +864,6 @@ struct device_link *device_link_add(struct device *consumer,
}
EXPORT_SYMBOL_GPL(device_link_add);
#ifdef CONFIG_SRCU
static void __device_link_del(struct kref *kref)
{
struct device_link *link = container_of(kref, struct device_link, kref);
@ -853,25 +873,9 @@ static void __device_link_del(struct kref *kref)
pm_runtime_drop_link(link);
list_del_rcu(&link->s_node);
list_del_rcu(&link->c_node);
device_link_remove_from_lists(link);
device_unregister(&link->link_dev);
}
#else /* !CONFIG_SRCU */
static void __device_link_del(struct kref *kref)
{
struct device_link *link = container_of(kref, struct device_link, kref);
dev_info(link->consumer, "Dropping the link to %s\n",
dev_name(link->supplier));
pm_runtime_drop_link(link);
list_del(&link->s_node);
list_del(&link->c_node);
device_unregister(&link->link_dev);
}
#endif /* !CONFIG_SRCU */
static void device_link_put_kref(struct device_link *link)
{
@ -1637,6 +1637,7 @@ void pm_runtime_init(struct device *dev)
dev->power.request_pending = false;
dev->power.request = RPM_REQ_NONE;
dev->power.deferred_resume = false;
dev->power.needs_force_resume = 0;
INIT_WORK(&dev->power.work, pm_runtime_work);
dev->power.timer_expires = 0;
@ -1804,10 +1805,12 @@ int pm_runtime_force_suspend(struct device *dev)
* its parent, but set its status to RPM_SUSPENDED anyway in case this
* function will be called again for it in the meantime.
*/
if (pm_runtime_need_not_resume(dev))
if (pm_runtime_need_not_resume(dev)) {
pm_runtime_set_suspended(dev);
else
} else {
__update_runtime_status(dev, RPM_SUSPENDED);
dev->power.needs_force_resume = 1;
}
return 0;
@ -1834,7 +1837,7 @@ int pm_runtime_force_resume(struct device *dev)
int (*callback)(struct device *);
int ret = 0;
if (!pm_runtime_status_suspended(dev) || pm_runtime_need_not_resume(dev))
if (!pm_runtime_status_suspended(dev) || !dev->power.needs_force_resume)
goto out;
/*
@ -1853,6 +1856,7 @@ int pm_runtime_force_resume(struct device *dev)
pm_runtime_mark_last_busy(dev);
out:
dev->power.needs_force_resume = 0;
pm_runtime_enable(dev);
return ret;
}
@ -866,25 +866,34 @@ EXPORT_SYMBOL_GPL(fwnode_remove_software_node);
/**
* device_add_software_node - Assign software node to a device
* @dev: The device the software node is meant for.
* @swnode: The software node.
* @node: The software node.
*
* This function will register @swnode and make it the secondary firmware node
* pointer of @dev. If @dev has no primary node, then @swnode will become the primary
* node.
* This function will make @node the secondary firmware node pointer of @dev. If
* @dev has no primary node, then @node will become the primary node. The
* function will register @node automatically if it wasn't already registered.
*/
int device_add_software_node(struct device *dev, const struct software_node *swnode)
int device_add_software_node(struct device *dev, const struct software_node *node)
{
struct swnode *swnode;
int ret;
/* Only one software node per device. */
if (dev_to_swnode(dev))
return -EBUSY;
ret = software_node_register(swnode);
if (ret)
return ret;
swnode = software_node_to_swnode(node);
if (swnode) {
kobject_get(&swnode->kobj);
} else {
ret = software_node_register(node);
if (ret)
return ret;
set_secondary_fwnode(dev, software_node_fwnode(swnode));
swnode = software_node_to_swnode(node);
}
set_secondary_fwnode(dev, &swnode->fwnode);
software_node_notify(dev, KOBJ_ADD);
return 0;
}
@ -921,8 +930,8 @@ int software_node_notify(struct device *dev, unsigned long action)
switch (action) {
case KOBJ_ADD:
ret = sysfs_create_link(&dev->kobj, &swnode->kobj,
"software_node");
ret = sysfs_create_link_nowarn(&dev->kobj, &swnode->kobj,
"software_node");
if (ret)
break;
@ -1330,6 +1330,34 @@ static int __maybe_unused sysc_runtime_resume(struct device *dev)
return error;
}
static int sysc_reinit_module(struct sysc *ddata, bool leave_enabled)
{
struct device *dev = ddata->dev;
int error;
/* Disable target module if it is enabled */
if (ddata->enabled) {
error = sysc_runtime_suspend(dev);
if (error)
dev_warn(dev, "reinit suspend failed: %i\n", error);
}
/* Enable target module */
error = sysc_runtime_resume(dev);
if (error)
dev_warn(dev, "reinit resume failed: %i\n", error);
if (leave_enabled)
return error;
/* Disable target module if no leave_enabled was set */
error = sysc_runtime_suspend(dev);
if (error)
dev_warn(dev, "reinit suspend failed: %i\n", error);
return error;
}
static int __maybe_unused sysc_noirq_suspend(struct device *dev)
{
struct sysc *ddata;
@ -1340,12 +1368,18 @@ static int __maybe_unused sysc_noirq_suspend(struct device *dev)
(SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE))
return 0;
return pm_runtime_force_suspend(dev);
if (!ddata->enabled)
return 0;
ddata->needs_resume = 1;
return sysc_runtime_suspend(dev);
}
static int __maybe_unused sysc_noirq_resume(struct device *dev)
{
struct sysc *ddata;
int error = 0;
ddata = dev_get_drvdata(dev);
@ -1353,7 +1387,19 @@ static int __maybe_unused sysc_noirq_resume(struct device *dev)
(SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE))
return 0;
return pm_runtime_force_resume(dev);
if (ddata->cfg.quirks & SYSC_QUIRK_REINIT_ON_RESUME) {
error = sysc_reinit_module(ddata, ddata->needs_resume);
if (error)
dev_warn(dev, "noirq_resume failed: %i\n", error);
} else if (ddata->needs_resume) {
error = sysc_runtime_resume(dev);
if (error)
dev_warn(dev, "noirq_resume failed: %i\n", error);
}
ddata->needs_resume = 0;
return error;
}
static const struct dev_pm_ops sysc_pm_ops = {
@ -1404,9 +1450,9 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
/* Uarts on omap4 and later */
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff,
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff,
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
/* Quirks that need to be set based on the module address */
SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff,
@ -1462,7 +1508,8 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050,
0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -ENODEV, 0x4ea2080d, 0xffffffff,
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY),
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY |
SYSC_QUIRK_REINIT_ON_RESUME),
SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0,
SYSC_MODULE_QUIRK_WDT),
/* PRUSS on am3, am4 and am5 */
@ -984,6 +984,8 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data)
hdp->hd_phys_address = fixmem32->address;
hdp->hd_address = ioremap(fixmem32->address,
HPET_RANGE_SIZE);
if (!hdp->hd_address)
return AE_ERROR;
if (hpet_is_known(hdp)) {
iounmap(hdp->hd_address);
@ -2,6 +2,7 @@
#include <linux/clk.h>
#include <linux/clocksource.h>
#include <linux/clockchips.h>
#include <linux/cpuhotplug.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/iopoll.h>
@ -630,6 +631,78 @@ static int __init dmtimer_clockevent_init(struct device_node *np)
return error;
}
/* Dmtimer as percpu timer. See dra7 ARM architected timer wrap erratum i940 */
static DEFINE_PER_CPU(struct dmtimer_clockevent, dmtimer_percpu_timer);
static int __init dmtimer_percpu_timer_init(struct device_node *np, int cpu)
{
struct dmtimer_clockevent *clkevt;
int error;
if (!cpu_possible(cpu))
return -EINVAL;
if (!of_property_read_bool(np->parent, "ti,no-reset-on-init") ||
!of_property_read_bool(np->parent, "ti,no-idle"))
pr_warn("Incomplete dtb for percpu dmtimer %pOF\n", np->parent);
clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
error = dmtimer_clkevt_init_common(clkevt, np, CLOCK_EVT_FEAT_ONESHOT,
cpumask_of(cpu), "percpu-dmtimer",
500);
if (error)
return error;
return 0;
}
/* See TRM for timer internal resynch latency */
static int omap_dmtimer_starting_cpu(unsigned int cpu)
{
struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, cpu);
struct clock_event_device *dev = &clkevt->dev;
struct dmtimer_systimer *t = &clkevt->t;
clockevents_config_and_register(dev, t->rate, 3, ULONG_MAX);
irq_force_affinity(dev->irq, cpumask_of(cpu));
return 0;
}
static int __init dmtimer_percpu_timer_startup(void)
{
struct dmtimer_clockevent *clkevt = per_cpu_ptr(&dmtimer_percpu_timer, 0);
struct dmtimer_systimer *t = &clkevt->t;
if (t->sysc) {
cpuhp_setup_state(CPUHP_AP_TI_GP_TIMER_STARTING,
"clockevents/omap/gptimer:starting",
omap_dmtimer_starting_cpu, NULL);
}
return 0;
}
subsys_initcall(dmtimer_percpu_timer_startup);
static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
{
struct device_node *arm_timer;
arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
if (of_device_is_available(arm_timer)) {
pr_warn_once("ARM architected timer wrap issue i940 detected\n");
return 0;
}
if (pa == 0x48034000) /* dra7 dmtimer3 */
return dmtimer_percpu_timer_init(np, 0);
else if (pa == 0x48036000) /* dra7 dmtimer4 */
return dmtimer_percpu_timer_init(np, 1);
return 0;
}
/* Clocksource */
static struct dmtimer_clocksource *
to_dmtimer_clocksource(struct clocksource *cs)
@ -763,6 +836,9 @@ static int __init dmtimer_systimer_init(struct device_node *np)
if (clockevent == pa)
return dmtimer_clockevent_init(np);
if (of_machine_is_compatible("ti,dra7"))
return dmtimer_percpu_quirk_init(np, pa);
return 0;
}