Merge ad57a1022f ("Merge tag 'exfat-for-5.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat") into android-mainline

Steps on the way to 5.8-rc1.

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I4bc42f572167ea2f815688b4d1eb6124b6d260d4
This commit is contained in:
Greg Kroah-Hartman 2020-06-24 17:54:12 +02:00
commit a253db8915
1402 changed files with 20269 additions and 11612 deletions


@ -486,6 +486,7 @@ What: /sys/devices/system/cpu/vulnerabilities
/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
/sys/devices/system/cpu/vulnerabilities/l1tf
/sys/devices/system/cpu/vulnerabilities/mds
/sys/devices/system/cpu/vulnerabilities/srbds
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
/sys/devices/system/cpu/vulnerabilities/itlb_multihit
Date: January 2018


@ -1170,6 +1170,13 @@ PAGE_SIZE multiple when read back.
Under certain circumstances, the usage may go over the limit
temporarily.

In the default configuration, regular 0-order allocations always
succeed unless the OOM killer chooses the current task as a victim.

Some kinds of allocations don't invoke the OOM killer.
The caller could retry them differently, return them to userspace
as -ENOMEM, or silently ignore them in cases like disk readahead.

This is the ultimate protection mechanism. As long as the
high limit is used and monitored properly, this limit's
utility is limited to providing the final safety net.
@ -1226,17 +1233,9 @@ PAGE_SIZE multiple when read back.
The number of times the cgroup's memory usage
reached the limit and allocation was about to fail.

This event is not raised if the OOM killer is not
considered as an option, e.g. for failed high-order
allocations or if the caller asked not to retry attempts.
oom_kill
The number of processes belonging to this cgroup


@ -13,6 +13,11 @@ kernel code to obtain additional kernel information. Currently, if
``print_hex_dump_debug()``/``print_hex_dump_bytes()`` calls can be dynamically
enabled per-callsite.

If you do not want to enable dynamic debug globally (e.g. on an embedded
system), you may set ``CONFIG_DYNAMIC_DEBUG_CORE`` for basic support of
dynamic debug and add ``ccflags-y += -DDYNAMIC_DEBUG_MODULE`` to the Makefile
of any module which you'd like to dynamically debug later.

If ``CONFIG_DYNAMIC_DEBUG`` is not set, ``print_hex_dump_debug()`` is just a
shortcut for ``print_hex_dump(KERN_DEBUG)``.
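The Makefile addition described above can be sketched as follows; ``foo`` is a hypothetical module name used only for illustration, not one from the kernel tree:

```make
# Hypothetical module Makefile (module name "foo" is an assumption).
# With CONFIG_DYNAMIC_DEBUG_CORE=y in the kernel config, this flag
# enables dynamic debug for this module only.
ccflags-y += -DDYNAMIC_DEBUG_MODULE
obj-m := foo.o
```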


@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
mds
tsx_async_abort
multihit.rst
special-register-buffer-data-sampling.rst


@ -0,0 +1,149 @@
.. SPDX-License-Identifier: GPL-2.0
SRBDS - Special Register Buffer Data Sampling
=============================================
SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
infer values returned from special register accesses. Special register
accesses are accesses to off-core registers. According to Intel's evaluation,
the special register reads that have a security expectation of privacy are
RDRAND, RDSEED and SGX EGETKEY.
When RDRAND, RDSEED and EGETKEY instructions are used, the data is moved
to the core through the special register mechanism that is susceptible
to MDS attacks.
Affected processors
--------------------
Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
be affected.
A processor is affected by SRBDS if its Family_Model and stepping is
in the following list, with the exception of the listed processors
exporting MDS_NO while Intel TSX is available yet not enabled. The
latter class of processors is only affected when Intel TSX is enabled
by software using TSX_CTRL_MSR; otherwise they are not affected.
============= ============ ========
common name Family_Model Stepping
============= ============ ========
IvyBridge 06_3AH All
Haswell 06_3CH All
Haswell_L 06_45H All
Haswell_G 06_46H All
Broadwell_G 06_47H All
Broadwell 06_3DH All
Skylake_L 06_4EH All
Skylake 06_5EH All
Kabylake_L 06_8EH <= 0xC
Kabylake 06_9EH <= 0xD
============= ============ ========
Related CVEs
------------
The following CVE entry is related to this SRBDS issue:
============== ===== =====================================
CVE-2020-0543 SRBDS Special Register Buffer Data Sampling
============== ===== =====================================
Attack scenarios
----------------
An unprivileged user can extract values returned from RDRAND and RDSEED
executed on another core or sibling thread using MDS techniques.
Mitigation mechanism
--------------------
Intel will release microcode updates that modify the RDRAND, RDSEED, and
EGETKEY instructions to overwrite secret special register data in the shared
staging buffer before the secret data can be accessed by another logical
processor.
During execution of the RDRAND, RDSEED, or EGETKEY instructions, off-core
accesses from other logical processors will be delayed until the special
register read is complete and the secret data in the shared staging buffer is
overwritten.
This has three effects on performance:
#. RDRAND, RDSEED, or EGETKEY instructions have higher latency.
#. Executing RDRAND at the same time on multiple logical processors will be
serialized, resulting in an overall reduction in the maximum RDRAND
bandwidth.
#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
logical processors that miss their core caches, with an impact similar to
legacy locked cache-line-split accesses.
The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable
the mitigation for RDRAND and RDSEED instructions executed outside of Intel
Software Guard Extensions (Intel SGX) enclaves. On logical processors that
disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not
take longer to execute and do not impact performance of sibling logical
processors' memory accesses. The opt-out mechanism does not affect Intel SGX
enclaves (including execution of RDRAND or RDSEED inside an enclave, as well
as EGETKEY execution).
IA32_MCU_OPT_CTRL MSR Definition
--------------------------------
Along with the mitigation for this issue, Intel added a new thread-scope
IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and
RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
9]==1. This MSR is introduced through the microcode update.
Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
enclave on that logical processor. Opting out of the mitigation for a
particular logical processor does not affect the RDRAND and RDSEED mitigations
for other logical processors.
Note that inside of an Intel SGX enclave, the mitigation is applied regardless
of the value of RNGDS_MITG_DIS.
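The enumeration check and the opt-out bit described above amount to simple bit tests. A small Python sketch of that arithmetic follows; the MSR address and bit positions are the ones quoted in this section, but the helper names are illustrative only:

```python
# Constants from the text above: IA32_MCU_OPT_CTRL is MSR 0x123,
# SRBDS_CTRL is CPUID.(EAX=07H,ECX=0).EDX bit 9, RNGDS_MITG_DIS is bit 0.
IA32_MCU_OPT_CTRL = 0x123
SRBDS_CTRL_BIT = 9
RNGDS_MITG_DIS_BIT = 0

def srbds_ctrl_enumerated(edx: int) -> bool:
    """Return True if a CPUID leaf 7 EDX value advertises SRBDS_CTRL."""
    return bool((edx >> SRBDS_CTRL_BIT) & 1)

def opt_out_of_mitigation(msr_value: int) -> int:
    """Return the IA32_MCU_OPT_CTRL value with RNGDS_MITG_DIS set."""
    return msr_value | (1 << RNGDS_MITG_DIS_BIT)

# Example: EDX with bit 9 set enumerates the MSR; setting bit 0 opts out.
print(srbds_ctrl_enumerated(1 << 9))    # True
print(hex(opt_out_of_mitigation(0x0)))  # 0x1
```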
Mitigation control on the kernel command line
---------------------------------------------
The kernel command line allows control over the SRBDS mitigation at boot time
with the option "srbds=". The option for this is:
============= =============================================================
off This option disables SRBDS mitigation for RDRAND and RDSEED on
affected platforms.
============= =============================================================
SRBDS System Information
------------------------
The Linux kernel provides vulnerability status information through sysfs. For
SRBDS this can be accessed by the following sysfs file:
/sys/devices/system/cpu/vulnerabilities/srbds
The possible values contained in this file are:
============================== =============================================
Not affected Processor not vulnerable
Vulnerable Processor vulnerable and mitigation disabled
Vulnerable: No microcode Processor vulnerable and microcode is missing
mitigation
Mitigation: Microcode Processor is vulnerable and mitigation is in
effect.
Mitigation: TSX disabled Processor is only vulnerable when TSX is
enabled while this system was booted with TSX
disabled.
Unknown: Dependent on
hypervisor status Running on virtual guest processor that is
affected but with no way to know if host
processor is mitigated or vulnerable.
============================== =============================================
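A tiny Python sketch that classifies the contents of this sysfs file; the value strings are the ones from the table above, while the helper name and coarse categories are illustrative assumptions (the file only exists on x86 kernels that know about SRBDS):

```python
from pathlib import Path

SRBDS_SYSFS = Path("/sys/devices/system/cpu/vulnerabilities/srbds")

def srbds_status(text: str) -> str:
    """Map the raw sysfs string to a coarse category."""
    if text.startswith("Not affected"):
        return "ok"
    if text.startswith("Mitigation:"):
        return "mitigated"
    if text.startswith("Vulnerable"):
        return "vulnerable"
    return "unknown"

# Read the live status if the file is present on this system.
if SRBDS_SYSFS.exists():
    print(srbds_status(SRBDS_SYSFS.read_text().strip()))
```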
SRBDS Default mitigation
------------------------
The updated microcode serializes processor access during execution of RDRAND
or RDSEED, and ensures that the shared buffer is overwritten before it is
released for reuse. Use the "srbds=off" kernel command line option to disable
the mitigation for RDRAND and RDSEED.


@ -521,6 +521,14 @@ will cause a kdump to occur at the panic() call. In cases where a user wants
to specify this during runtime, /proc/sys/kernel/panic_on_warn can be set to 1
to achieve the same behaviour.

Trigger Kdump on add_taint()
============================

The kernel parameter panic_on_taint facilitates a conditional call to panic()
from within add_taint() whenever the value set in this bitmask matches the
bit flag being set by add_taint().

This will cause a kdump to occur at the add_taint()->panic() call.

Contact
=======


@ -1445,7 +1445,7 @@
hardlockup_all_cpu_backtrace=
[KNL] Should the hard-lockup detector generate
backtraces on all cpus.
Format: 0 | 1
hashdist= [KNL,NUMA] Large hashes allocated during boot
are distributed across NUMA nodes. Defaults on
@ -1513,9 +1513,9 @@
hung_task_panic=
[KNL] Should the hung task detector generate panics.
Format: 0 | 1

A value of 1 instructs the kernel to panic when a
hung task is detected. The default value is controlled
by the CONFIG_BOOTPARAM_HUNG_TASK_PANIC build-time
option. The value selected by this boot parameter can
@ -3447,6 +3447,19 @@
bit 4: print ftrace buffer
bit 5: print all printk messages in buffer
panic_on_taint= Bitmask for conditionally calling panic() in add_taint()
Format: <hex>[,nousertaint]
Hexadecimal bitmask representing the set of TAINT flags
that will cause the kernel to panic when add_taint() is
called with any of the flags in this set.
The optional switch "nousertaint" can be used to
prevent userspace-forced crashes, by rejecting writes to
sysctl /proc/sys/kernel/tainted of any flag set matching
the bitmask set in panic_on_taint.
See Documentation/admin-guide/tainted-kernels.rst for
extra details on the taint flags that users can pick
to compose the bitmask to assign to panic_on_taint.
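Composing the hex bitmask is plain bit arithmetic. A Python sketch follows; the two bit numbers shown are taint bits documented in Documentation/admin-guide/tainted-kernels.rst, and only these two are used here as an illustration:

```python
# Taint bit numbers from admin-guide/tainted-kernels.rst (illustrative subset).
TAINT_PROPRIETARY_MODULE = 0   # 'P' - proprietary module was loaded
TAINT_WARN = 9                 # 'W' - kernel issued a WARN()

def taint_bitmask(*bits: int) -> str:
    """Build the hex value to pass as panic_on_taint=<hex>."""
    mask = 0
    for bit in bits:
        mask |= 1 << bit
    return hex(mask)

# Panic when a WARN() taints the kernel:
print(f"panic_on_taint={taint_bitmask(TAINT_WARN)}")  # panic_on_taint=0x200
```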
panic_on_warn panic() instead of WARN(). Useful to cause kdump
on a WARN().
@ -3715,6 +3728,8 @@
may put more devices in an IOMMU group.
force_floating [S390] Force usage of floating interrupts.
nomio [S390] Do not use MIO instructions.
norid [S390] Ignore the RID field and force use of
one PCI domain per PCI function.
pcie_aspm= [PCIE] Forcibly enable or disable PCIe Active State Power
Management.
@ -4652,9 +4667,9 @@
softlockup_panic=
[KNL] Should the soft-lockup detector generate panics.
Format: 0 | 1

A value of 1 instructs the soft-lockup detector
to panic the machine when a soft-lockup occurs. It is
also controlled by the kernel.softlockup_panic sysctl
and CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC, which is the
@ -4663,7 +4678,7 @@
softlockup_all_cpu_backtrace=
[KNL] Should the soft-lockup detector generate
backtraces on all cpus.
Format: 0 | 1

sonypi.*= [HW] Sony Programmable I/O Control Device driver
See Documentation/admin-guide/laptops/sonypi.rst
@ -4822,6 +4837,26 @@
the kernel will oops in either "warn" or "fatal"
mode.
srbds= [X86,INTEL]
Control the Special Register Buffer Data Sampling
(SRBDS) mitigation.
Certain CPUs are vulnerable to an MDS-like
exploit which can leak bits from the random
number generator.
By default, this issue is mitigated by
microcode. However, the microcode fix can cause
the RDRAND and RDSEED instructions to become
much slower. Among other effects, this will
result in reduced throughput from /dev/urandom.
The microcode mitigation can be disabled with
the following option:
off: Disable mitigation and remove
performance impact to RDRAND and RDSEED
srcutree.counter_wrap_check [KNL]
Specifies how frequently to check for
grace-period sequence counter wrap for the
@ -4956,6 +4991,15 @@
switches= [HW,M68k]
sysctl.*= [KNL]
Set a sysctl parameter, right before loading the init
process, as if the value was written to the respective
/proc/sys/... file. Both '.' and '/' are recognized as
separators. Unrecognized parameters and invalid values
are reported in the kernel log. Sysctls registered
later by a loaded module cannot be set this way.
Example: sysctl.vm.swappiness=40
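The separator handling described above can be sketched as a tiny translation function; this is an illustration of the naming convention, not the kernel's implementation:

```python
def sysctl_param_to_path(param: str) -> str:
    """Translate a sysctl.<param> name to its /proc/sys file,
    accepting both '.' and '/' as separators."""
    return "/proc/sys/" + param.replace(".", "/")

print(sysctl_param_to_path("vm.swappiness"))        # /proc/sys/vm/swappiness
print(sysctl_param_to_path("net/ipv4/ip_forward"))  # /proc/sys/net/ipv4/ip_forward
```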
sysfs.deprecated=0|1 [KNL]
Enable/disable old style sysfs layout for old udev
on older distributions. When this option is enabled


@ -364,19 +364,19 @@ follows:
2) for querying the policy, we do not need to take an extra reference on the
target task's task policy nor vma policies because we always acquire the
task's mm's mmap_lock for read during the query. The set_mempolicy() and
mbind() APIs [see below] always acquire the mmap_lock for write when
installing or replacing task or vma policies. Thus, there is no possibility
of a task or thread freeing a policy while another task or thread is
querying it.

3) Page allocation usage of task or vma policy occurs in the fault path where
we hold the mmap_lock for read. Again, because replacing the task or vma
policy requires that the mmap_lock be held for write, the policy can't be
freed out from under us while we're using it for page allocation.

4) Shared policies require special consideration. One task can replace a
shared memory policy while another task, with a distinct mmap_lock, is
querying or allocating a page based on the policy. To resolve this
potential race, the shared policy infrastructure adds an extra reference
to the shared policy during lookup while holding a spin lock on the shared


@ -33,7 +33,7 @@ memory ranges) provides two primary functionalities:
The real advantage of userfaults, compared to regular virtual memory
management via mremap/mprotect, is that the userfaults in all their
operations never involve heavyweight structures like vmas (in fact the
``userfaultfd`` runtime load never takes the mmap_lock for writing).

Vmas are not suitable for page- (or hugepage) granular fault tracking
when dealing with virtual address spaces that could span


@ -335,6 +335,20 @@ Path for the hotplug policy agent.
Default value is "``/sbin/hotplug``".
hung_task_all_cpu_backtrace
===========================

If this option is set, the kernel will send an NMI to all CPUs to dump
their backtraces when a hung task is detected. This file shows up if
CONFIG_DETECT_HUNG_TASK and CONFIG_SMP are enabled.

0: Won't show all CPUs backtraces when a hung task is detected.
This is the default behavior.

1: Will non-maskably interrupt all CPUs and dump their backtraces when
a hung task is detected.
hung_task_panic
===============
@ -632,6 +646,22 @@ rate for each task.
scanned for a given scan.
oops_all_cpu_backtrace
======================

If this option is set, the kernel will send an NMI to all CPUs to dump
their backtraces when an oops event occurs. It should be used as a last
resort in case a panic cannot be triggered (to protect VMs running, for
example) or kdump can't be collected. This file shows up if CONFIG_SMP
is enabled.

0: Won't show all CPUs backtraces when an oops is detected.
This is the default behavior.

1: Will non-maskably interrupt all CPUs and dump their backtraces when
an oops event is detected.
osrelease, ostype & version
===========================
@ -1239,6 +1269,13 @@ ORed together. The letters are seen in "Tainted" line of Oops reports.
See :doc:`/admin-guide/tainted-kernels` for more information.
Note:
writes to this sysctl interface will fail with ``EINVAL`` if the kernel is
booted with the command line option ``panic_on_taint=<bitmask>,nousertaint``
and any of the ORed-together values being written to ``tainted`` match
the bitmask declared in panic_on_taint.
See :doc:`/admin-guide/kernel-parameters` for more details on that particular
kernel command line option and its optional ``nousertaint`` switch.
threads-max
===========


@ -148,23 +148,46 @@ NOTE: Some pages, such as DAX pages, cannot be pinned with longterm pins. That's
because DAX pages do not have a separate page cache, and so "pinning" implies
locking down file system blocks, which is not (yet) supported in that way.
CASE 3: MMU notifier registration, with or without page faulting hardware
--------------------------------------------------------------------------

Device drivers can pin pages via get_user_pages*(), and register for mmu
notifier callbacks for the memory range. Then, upon receiving a notifier
"invalidate range" callback, stop the device from using the range, and unpin
the pages. There may be other possible schemes, such as, for example,
explicitly synchronizing against pending IO, that accomplish approximately
the same thing.

Or, if the hardware supports replayable page faults, then the device driver can
avoid pinning entirely (this is ideal), as follows: register for mmu notifier
callbacks as above, but instead of stopping the device and unpinning in the
callback, simply remove the range from the device's page tables.

Either way, as long as the driver unpins the pages upon mmu notifier callback,
then there is proper synchronization with both filesystem and mm
(page_mkclean(), munmap(), etc). Therefore, neither flag needs to be set.

CASE 4: Pinning for struct page manipulation only
-------------------------------------------------

If only struct page data (as opposed to the actual memory contents that a page
is tracking) is affected, then normal GUP calls are sufficient, and neither flag
needs to be set.
CASE 5: Pinning in order to write to the data within the page
-------------------------------------------------------------
Even though neither DMA nor Direct IO is involved, just a simple case of "pin,
write to a page's data, unpin" can cause a problem. Case 5 may be considered a
superset of Case 1, plus Case 2, plus anything that invokes that pattern. In
other words, if the code is neither Case 1 nor Case 2, it may still require
FOLL_PIN, for patterns like this:
Correct (uses FOLL_PIN calls):
pin_user_pages()
write to the data within the pages
unpin_user_pages()
INCORRECT (uses FOLL_GET calls):
get_user_pages()
write to the data within the pages
put_page()
page_maybe_dma_pinned(): the whole point of pinning page_maybe_dma_pinned(): the whole point of pinning
=================================================== ===================================================


@ -151,6 +151,29 @@ note some tests will require root privileges::
$ cd kselftest
$ ./run_kselftest.sh
Packaging selftests
===================
In some cases packaging is desired, such as when tests need to run on a
different system. To package selftests, run::
$ make -C tools/testing/selftests gen_tar
This generates a tarball in the `INSTALL_PATH/kselftest-packages` directory. By
default, `.gz` format is used. The tar format can be overridden by specifying
a `FORMAT` make variable. Any value recognized by `tar's auto-compress`_ option
is supported, such as::
$ make -C tools/testing/selftests gen_tar FORMAT=.xz
`make gen_tar` invokes `make install` so you can use it to package a subset of
tests by using variables specified in `Running a subset of selftests`_
section::
$ make -C tools/testing/selftests gen_tar TARGETS="bpf" FORMAT=.xz
.. _tar's auto-compress: https://www.gnu.org/software/tar/manual/html_node/gzip.html#auto_002dcompress
Contributing new tests
======================


@ -32,15 +32,17 @@ test targets as well. The ``.kunitconfig`` should also contain any other config
options required by the tests.

A good starting point for a ``.kunitconfig`` is the KUnit defconfig:

.. code-block:: bash

cd $PATH_TO_LINUX_REPO
cp arch/um/configs/kunit_defconfig .kunitconfig

You can then add any other Kconfig options you wish, e.g.:

.. code-block:: none

CONFIG_LIST_KUNIT_TEST=y

:doc:`kunit_tool <kunit-tool>` will ensure that all config options set in
``.kunitconfig`` are set in the kernel ``.config`` before running the tests.
@ -54,8 +56,8 @@ using.
other tools (such as make menuconfig) to adjust other config options.
Running the tests (KUnit Wrapper)
---------------------------------

To make sure that everything is set up correctly, simply invoke the Python
wrapper from your kernel repo:
@ -105,8 +107,9 @@ have config options ending in ``_KUNIT_TEST``.
KUnit and KUnit tests can be compiled as modules: in this case the tests in a
module will be run when the module is loaded.

Running the tests (w/o KUnit Wrapper)
-------------------------------------
Build and run your kernel as usual. Test output will be written to the kernel
log in `TAP <https://testanything.org/>`_ format.


@ -595,7 +595,7 @@ able to run one test case per invocation.
KUnit debugfs representation
============================

When kunit test suites are initialized, they create an associated directory
in ``/sys/kernel/debug/kunit/<test-suite>``. The directory contains one file
- results: "cat results" displays results of each test case and the results
of the entire suite for the last test run.
@ -604,4 +604,4 @@ The debugfs representation is primarily of use when kunit test suites are
run in a native environment, either as modules or builtin. Having a way
to display results like this is valuable as otherwise results can be
intermixed with other events in dmesg output. The maximum size of each
results file is KUNIT_LOG_SIZE bytes (defined in ``include/kunit/test.h``).


@ -0,0 +1,61 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/iommu/allwinner,sun50i-h6-iommu.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Allwinner H6 IOMMU Device Tree Bindings

maintainers:
  - Chen-Yu Tsai <wens@csie.org>
  - Maxime Ripard <mripard@kernel.org>

properties:
  "#iommu-cells":
    const: 1
    description:
      The content of the cell is the master ID.

  compatible:
    const: allwinner,sun50i-h6-iommu

  reg:
    maxItems: 1

  interrupts:
    maxItems: 1

  clocks:
    maxItems: 1

  resets:
    maxItems: 1

required:
  - "#iommu-cells"
  - compatible
  - reg
  - interrupts
  - clocks
  - resets

additionalProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/interrupt-controller/irq.h>
    #include <dt-bindings/clock/sun50i-h6-ccu.h>
    #include <dt-bindings/reset/sun50i-h6-ccu.h>

    iommu: iommu@30f0000 {
        compatible = "allwinner,sun50i-h6-iommu";
        reg = <0x030f0000 0x10000>;
        interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>;
        clocks = <&ccu CLK_BUS_IOMMU>;
        resets = <&ccu RST_BUS_IOMMU>;
        #iommu-cells = <1>;
    };

...


@ -42,7 +42,9 @@ properties:
- const: arm,mmu-500
- const: arm,smmu-v2
- items:
- enum:
- arm,mmu-400
- arm,mmu-401
- const: arm,smmu-v1
- enum:
- arm,smmu-v1


@ -0,0 +1,77 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/remoteproc/ingenic,vpu.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Ingenic Video Processing Unit bindings

description:
  Inside the Video Processing Unit (VPU) of the recent JZ47xx SoCs from
  Ingenic is a second Xburst MIPS CPU very similar to the main core.
  This document describes the devicetree bindings for this auxiliary
  processor.

maintainers:
  - Paul Cercueil <paul@crapouillou.net>

properties:
  compatible:
    const: ingenic,jz4770-vpu-rproc

  reg:
    items:
      - description: aux registers
      - description: tcsm0 registers
      - description: tcsm1 registers
      - description: sram registers

  reg-names:
    items:
      - const: aux
      - const: tcsm0
      - const: tcsm1
      - const: sram

  clocks:
    items:
      - description: aux clock
      - description: vpu clock

  clock-names:
    items:
      - const: aux
      - const: vpu

  interrupts:
    description: VPU hardware interrupt

required:
  - compatible
  - reg
  - reg-names
  - clocks
  - clock-names
  - interrupts

additionalProperties: false

examples:
  - |
    #include <dt-bindings/clock/jz4770-cgu.h>

    vpu: video-decoder@132a0000 {
        compatible = "ingenic,jz4770-vpu-rproc";
        reg = <0x132a0000 0x20>, /* AUX */
              <0x132b0000 0x4000>, /* TCSM0 */
              <0x132c0000 0xc000>, /* TCSM1 */
              <0x132f0000 0x7000>; /* SRAM */
        reg-names = "aux", "tcsm0", "tcsm1", "sram";
        clocks = <&cgu JZ4770_CLK_AUX>, <&cgu JZ4770_CLK_VPU>;
        clock-names = "aux", "vpu";
        interrupt-parent = <&cpuintc>;
        interrupts = <3>;
    };


@@ -15,12 +15,16 @@ on the Qualcomm ADSP Hexagon core.
 		    "qcom,qcs404-adsp-pas"
 		    "qcom,qcs404-cdsp-pas"
 		    "qcom,qcs404-wcss-pas"
+		    "qcom,sc7180-mpss-pas"
 		    "qcom,sdm845-adsp-pas"
 		    "qcom,sdm845-cdsp-pas"
 		    "qcom,sm8150-adsp-pas"
 		    "qcom,sm8150-cdsp-pas"
 		    "qcom,sm8150-mpss-pas"
 		    "qcom,sm8150-slpi-pas"
+		    "qcom,sm8250-adsp-pas"
+		    "qcom,sm8250-cdsp-pas"
+		    "qcom,sm8250-slpi-pas"

 - interrupts-extended:
 	Usage: required

@@ -44,8 +48,12 @@ on the Qualcomm ADSP Hexagon core.
 	qcom,sm8150-adsp-pas:
 	qcom,sm8150-cdsp-pas:
 	qcom,sm8150-slpi-pas:
+	qcom,sm8250-adsp-pas:
+	qcom,sm8250-cdsp-pas:
+	qcom,sm8250-slpi-pas:
 		    must be "wdog", "fatal", "ready", "handover", "stop-ack"
 	qcom,qcs404-wcss-pas:
+	qcom,sc7180-mpss-pas:
 	qcom,sm8150-mpss-pas:
 		    must be "wdog", "fatal", "ready", "handover", "stop-ack",
 		    "shutdown-ack"

@@ -105,10 +113,14 @@ on the Qualcomm ADSP Hexagon core.
 	qcom,sdm845-cdsp-pas:
 	qcom,sm8150-adsp-pas:
 	qcom,sm8150-cdsp-pas:
+	qcom,sm8250-cdsp-pas:
 		    must be "cx", "load_state"
+	qcom,sc7180-mpss-pas:
 	qcom,sm8150-mpss-pas:
 		    must be "cx", "load_state", "mss"
+	qcom,sm8250-adsp-pas:
 	qcom,sm8150-slpi-pas:
+	qcom,sm8250-slpi-pas:
 		    must be "lcx", "lmx", "load_state"

 - memory-region:


@@ -79,7 +79,7 @@ on the Qualcomm Hexagon core.
 		    "snoc_axi", "mnoc_axi", "qdss"
 	qcom,sc7180-mss-pil:
 		    must be "iface", "bus", "xo", "snoc_axi", "mnoc_axi",
-		    "mss_crypto", "mss_nav", "nav"
+		    "nav"
 	qcom,sdm845-mss-pil:
 		    must be "iface", "bus", "mem", "xo", "gpll0_mss",
 		    "snoc_axi", "mnoc_axi", "prng"

@@ -102,6 +102,14 @@ on the Qualcomm Hexagon core.
 		    must be "mss_restart", "pdc_reset" for the modem
 		    sub-system on SC7180, SDM845 SoCs

+For devices where the mba and mpss sub-nodes are not specified, mba/mpss region
+should be referenced as follows:
+
+- memory-region:
+	Usage: required
+	Value type: <phandle>
+	Definition: reference to the reserved-memory for the mba region followed
+		    by the mpss region
+
 For the compatible strings below the following supplies are required:
 	"qcom,q6v5-pil"
 	"qcom,msm8916-mss-pil",

@@ -173,16 +181,15 @@ For the compatible string below the following supplies are required:

 For the compatible strings below the following phandle references are required:
 	"qcom,sc7180-mss-pil"
-- qcom,halt-nav-regs:
+- qcom,spare-regs:
 	Usage: required
 	Value type: <prop-encoded-array>
-	Definition: reference to a list of 2 phandles with one offset each for
-		    the modem sub-system running on SC7180 SoC. The first
-		    phandle reference is to the mss clock node followed by the
-		    offset within register space for nav halt register. The
-		    second phandle reference is to a syscon representing TCSR
-		    followed by the offset within syscon for conn_box_spare0
-		    register.
+	Definition: a phandle reference to a syscon representing TCSR followed
+		    by the offset within syscon for conn_box_spare0 register
+		    used by the modem sub-system running on SC7180 SoC.
+
+The Hexagon node must contain iommus property as described in ../iommu/iommu.txt
+on platforms which do not have TrustZone.

 = SUBNODES:
 The Hexagon node must contain two subnodes, named "mba" and "mpss" representing


@@ -1,5 +1,8 @@
-Glock internal locking rules
-------------------------------
+.. SPDX-License-Identifier: GPL-2.0
+
+============================
+Glock internal locking rules
+============================

 This documents the basic principles of the glock state machine
 internals. Each glock (struct gfs2_glock in fs/gfs2/incore.h)

@@ -24,24 +27,28 @@ There are three lock states that users of the glock layer can request,
 namely shared (SH), deferred (DF) and exclusive (EX). Those translate
 to the following DLM lock modes:

-Glock mode | DLM lock mode
-------------------------------
-UN         | IV/NL  Unlocked (no DLM lock associated with glock) or NL
-SH         | PR     (Protected read)
-DF         | CW     (Concurrent write)
-EX         | EX     (Exclusive)
+========== ====== =====================================================
+Glock mode DLM lock mode
+========== ====== =====================================================
+UN         IV/NL  Unlocked (no DLM lock associated with glock) or NL
+SH         PR     (Protected read)
+DF         CW     (Concurrent write)
+EX         EX     (Exclusive)
+========== ====== =====================================================

 Thus DF is basically a shared mode which is incompatible with the "normal"
 shared lock mode, SH. In GFS2 the DF mode is used exclusively for direct I/O
 operations. The glocks are basically a lock plus some routines which deal
 with cache management. The following rules apply for the cache:

-Glock mode | Cache data | Cache Metadata | Dirty Data | Dirty Metadata
---------------------------------------------------------------------------
-UN         | No         | No             | No         | No
-SH         | Yes        | Yes            | No         | No
-DF         | No         | Yes            | No         | No
-EX         | Yes        | Yes            | Yes        | Yes
+========== ========== ============== ========== ==============
+Glock mode Cache data Cache Metadata Dirty Data Dirty Metadata
+========== ========== ============== ========== ==============
+UN         No         No             No         No
+SH         Yes        Yes            No         No
+DF         No         Yes            No         No
+EX         Yes        Yes            Yes        Yes
+========== ========== ============== ========== ==============

 These rules are implemented using the various glock operations which
 are defined for each type of glock. Not all types of glocks use

@@ -49,21 +56,23 @@ all the modes. Only inode glocks use the DF mode for example.

 Table of glock operations and per type constants:

-Field        | Purpose
-----------------------------------------------------------------------------
-go_xmote_th  | Called before remote state change (e.g. to sync dirty data)
-go_xmote_bh  | Called after remote state change (e.g. to refill cache)
-go_inval     | Called if remote state change requires invalidating the cache
-go_demote_ok | Returns boolean value of whether its ok to demote a glock
-             | (e.g. checks timeout, and that there is no cached data)
-go_lock      | Called for the first local holder of a lock
-go_unlock    | Called on the final local unlock of a lock
-go_dump      | Called to print content of object for debugfs file, or on
-             | error to dump glock to the log.
-go_type      | The type of the glock, LM_TYPE_.....
-go_callback  | Called if the DLM sends a callback to drop this lock
-go_flags     | GLOF_ASPACE is set, if the glock has an address space
-             | associated with it
+============= =============================================================
+Field         Purpose
+============= =============================================================
+go_xmote_th   Called before remote state change (e.g. to sync dirty data)
+go_xmote_bh   Called after remote state change (e.g. to refill cache)
+go_inval      Called if remote state change requires invalidating the cache
+go_demote_ok  Returns boolean value of whether its ok to demote a glock
+              (e.g. checks timeout, and that there is no cached data)
+go_lock       Called for the first local holder of a lock
+go_unlock     Called on the final local unlock of a lock
+go_dump       Called to print content of object for debugfs file, or on
+              error to dump glock to the log.
+go_type       The type of the glock, ``LM_TYPE_*``
+go_callback   Called if the DLM sends a callback to drop this lock
+go_flags      GLOF_ASPACE is set, if the glock has an address space
+              associated with it
+============= =============================================================

 The minimum hold time for each lock is the time after a remote lock
 grant for which we ignore remote demote requests. This is in order to

@@ -82,21 +91,25 @@ rather than via the glock.

 Locking rules for glock operations:

-Operation     | GLF_LOCK bit lock held | gl_lockref.lock spinlock held
--------------------------------------------------------------------------
-go_xmote_th   | Yes                    | No
-go_xmote_bh   | Yes                    | No
-go_inval      | Yes                    | No
-go_demote_ok  | Sometimes              | Yes
-go_lock       | Yes                    | No
-go_unlock     | Yes                    | No
-go_dump       | Sometimes              | Yes
-go_callback   | Sometimes (N/A)        | Yes
+============= ====================== =============================
+Operation     GLF_LOCK bit lock held gl_lockref.lock spinlock held
+============= ====================== =============================
+go_xmote_th   Yes                    No
+go_xmote_bh   Yes                    No
+go_inval      Yes                    No
+go_demote_ok  Sometimes              Yes
+go_lock       Yes                    No
+go_unlock     Yes                    No
+go_dump       Sometimes              Yes
+go_callback   Sometimes (N/A)        Yes
+============= ====================== =============================

-N.B. Operations must not drop either the bit lock or the spinlock
-if its held on entry. go_dump and do_demote_ok must never block.
-Note that go_dump will only be called if the glock's state
-indicates that it is caching uptodate data.
+.. Note::
+
+   Operations must not drop either the bit lock or the spinlock
+   if its held on entry. go_dump and do_demote_ok must never block.
+   Note that go_dump will only be called if the glock's state
+   indicates that it is caching uptodate data.

 Glock locking order within GFS2:

@@ -104,7 +117,7 @@ Glock locking order within GFS2:
  2. Rename glock (for rename only)
  3. Inode glock(s)
     (Parents before children, inodes at "same level" with same parent in
-     lock number order)
+    lock number order)
  4. Rgrp glock(s) (for (de)allocation operations)
  5. Transaction glock (via gfs2_trans_begin) for non-read operations
  6. i_rw_mutex (if required)

@@ -117,8 +130,8 @@ determine the lifetime of the inode in question. Locking of inodes
 is on a per-inode basis. Locking of rgrps is on a per rgrp basis.
 In general we prefer to lock local locks prior to cluster locks.

 Glock Statistics
-------------------
+----------------

 The stats are divided into two sets: those relating to the
 super block and those relating to an individual glock. The

@@ -173,8 +186,8 @@ we'd like to get a better idea of these timings:
 1. To be able to better set the glock "min hold time"
 2. To spot performance issues more easily
 3. To improve the algorithm for selecting resource groups for
-allocation (to base it on lock wait time, rather than blindly
-using a "try lock")
+   allocation (to base it on lock wait time, rather than blindly
+   using a "try lock")

 Due to the smoothing action of the updates, a step change in
 some input quantity being sampled will only fully be taken

@@ -195,10 +208,13 @@ as possible. There are always inaccuracies in any
 measuring system, but I hope this is as accurate as we
 can reasonably make it.

-Per sb stats can be found here:
-/sys/kernel/debug/gfs2/<fsname>/sbstats
-Per glock stats can be found here:
-/sys/kernel/debug/gfs2/<fsname>/glstats
+Per sb stats can be found here::
+
+  /sys/kernel/debug/gfs2/<fsname>/sbstats
+
+Per glock stats can be found here::
+
+  /sys/kernel/debug/gfs2/<fsname>/glstats

 Assuming that debugfs is mounted on /sys/kernel/debug and also
 that <fsname> is replaced with the name of the gfs2 filesystem

@@ -206,14 +222,16 @@ in question.

 The abbreviations used in the output as are follows:

-srtt     - Smoothed round trip time for non-blocking dlm requests
-srttvar  - Variance estimate for srtt
-srttb    - Smoothed round trip time for (potentially) blocking dlm requests
-srttvarb - Variance estimate for srttb
-sirt     - Smoothed inter-request time (for dlm requests)
-sirtvar  - Variance estimate for sirt
-dlm      - Number of dlm requests made (dcnt in glstats file)
-queue    - Number of glock requests queued (qcnt in glstats file)
+========= ================================================================
+srtt      Smoothed round trip time for non blocking dlm requests
+srttvar   Variance estimate for srtt
+srttb     Smoothed round trip time for (potentially) blocking dlm requests
+srttvarb  Variance estimate for srttb
+sirt      Smoothed inter request time (for dlm requests)
+sirtvar   Variance estimate for sirt
+dlm       Number of dlm requests made (dcnt in glstats file)
+queue     Number of glock requests queued (qcnt in glstats file)
+========= ================================================================

 The sbstats file contains a set of these stats for each glock type (so 8 lines
 for each type) and for each cpu (one column per cpu). The glstats file contains

@@ -224,9 +242,12 @@ The gfs2_glock_lock_time tracepoint prints out the current values of the stats
 for the glock in question, along with some addition information on each dlm
 reply that is received:

-status - The status of the dlm request
-flags  - The dlm request flags
-tdiff  - The time taken by this specific request
+====== =======================================
+status The status of the dlm request
+flags  The dlm request flags
+tdiff  The time taken by this specific request
+====== =======================================

 (remaining fields as per above list)


@@ -88,6 +88,7 @@ Documentation for filesystem implementations.
    f2fs
    gfs2
    gfs2-uevents
+   gfs2-glocks
    hfs
    hfsplus
    hpfs


@@ -615,7 +615,7 @@ prototypes::

 locking rules:

 =============	========	===========================
-ops		mmap_sem	PageLocked(page)
+ops		mmap_lock	PageLocked(page)
 =============	========	===========================
 open:		yes
 close:		yes

@@ -15,6 +15,7 @@ s390 Architecture
    vfio-ccw
    zfcpdump
    common_io
+   pci

    text_files

Documentation/s390/pci.rst (new file, 125 lines)

@@ -0,0 +1,125 @@
.. SPDX-License-Identifier: GPL-2.0
=========
S/390 PCI
=========
Authors:
- Pierre Morel
Copyright, IBM Corp. 2020
Command line parameters and debugfs entries
===========================================
Command line parameters
-----------------------
* nomio
Do not use PCI Mapped I/O (MIO) instructions.
* norid
Ignore the RID field and force use of one PCI domain per PCI function.
debugfs entries
---------------
The S/390 debug feature (s390dbf) generates views to hold various debug results in sysfs directories of the form:
* /sys/kernel/debug/s390dbf/pci_*/
For example:
- /sys/kernel/debug/s390dbf/pci_msg/sprintf
Holds messages from the processing of PCI events, like machine check handling
and setting of global functionality, like UID checking.
Change the level of logging to be more or less verbose by piping
a number between 0 and 6 to /sys/kernel/debug/s390dbf/pci_*/level. For
details, see the documentation on the S/390 debug feature at
Documentation/s390/s390dbf.rst.
Sysfs entries
=============
Entries specific to zPCI functions and entries that hold zPCI information.
* /sys/bus/pci/slots/XXXXXXXX
The slot entries are set up using the function identifier (FID) of the
PCI function.
- /sys/bus/pci/slots/XXXXXXXX/power
A physical function that currently supports a virtual function cannot be
powered off until all virtual functions are removed with:
echo 0 > /sys/bus/pci/devices/XXXX:XX:XX.X/sriov_numvfs
* /sys/bus/pci/devices/XXXX:XX:XX.X/
- function_id
A zPCI function identifier that uniquely identifies the function in the Z server.
- function_handle
Low-level identifier used for a configured PCI function.
It might be useful for debugging.
- pchid
Model-dependent location of the I/O adapter.
- pfgid
PCI function group ID, functions that share identical functionality
use a common identifier.
A PCI group defines interrupts, IOMMU, IOTLB, and DMA specifics.
- vfn
The virtual function number, from 1 to N for virtual functions,
0 for physical functions.
- pft
The PCI function type.
- port
The port corresponds to the physical port the function is attached to.
It also gives an indication of the physical function a virtual function
is attached to.
- uid
The unique identifier (UID) is defined when configuring an LPAR and is
unique in the LPAR.
- pfip/segmentX
The segments determine the isolation of a function.
They correspond to the physical path to the function.
The more the segments are different, the more the functions are isolated.
Enumeration and hotplug
=======================
The PCI address consists of four parts: domain, bus, device and function,
and is of this form: DDDD:BB:dd.f
* When not using multi-functions (norid is set, or the firmware does not
support multi-functions):
- There is only one function per domain.
- The domain is set from the zPCI function's UID as defined during the
LPAR creation.
* When using multi-functions (norid parameter is not set),
zPCI functions are addressed differently:
- There is still only one bus per domain.
- There can be up to 256 functions per bus.
- The domain part of the address of all functions for
a multi-Function device is set from the zPCI function's UID as defined
in the LPAR creation for the function zero.
- New functions will only be ready for use after the function zero
(the function with devfn 0) has been enumerated.


@@ -204,15 +204,44 @@ definition of the region is::
 	__u32 ret_code;
 } __packed;

-This region is always available.
-
 While starting an I/O request, orb_area should be filled with the
 guest ORB, and scsw_area should be filled with the SCSW of the Virtual
 Subchannel.

 irb_area stores the I/O result.

-ret_code stores a return code for each access of the region.
+ret_code stores a return code for each access of the region. The following
+values may occur:
+
+``0``
+  The operation was successful.
+
+``-EOPNOTSUPP``
+  The orb specified transport mode or an unidentified IDAW format, or the
+  scsw specified a function other than the start function.
+
+``-EIO``
+  A request was issued while the device was not in a state ready to accept
+  requests, or an internal error occurred.
+
+``-EBUSY``
+  The subchannel was status pending or busy, or a request is already active.
+
+``-EAGAIN``
+  A request was being processed, and the caller should retry.
+
+``-EACCES``
+  The channel path(s) used for the I/O were found to be not operational.
+
+``-ENODEV``
+  The device was found to be not operational.
+
+``-EINVAL``
+  The orb specified a chain longer than 255 ccws, or an internal error
+  occurred.
+
+This region is always available.

 vfio-ccw cmd region
 -------------------
Currently, CLEAR SUBCHANNEL and HALT SUBCHANNEL use this region. Currently, CLEAR SUBCHANNEL and HALT SUBCHANNEL use this region.
command specifies the command to be issued; ret_code stores a return code
for each access of the region. The following values may occur:
``0``
The operation was successful.
``-ENODEV``
The device was found to be not operational.
``-EINVAL``
A command other than halt or clear was specified.
``-EIO``
A request was issued while the device was not in a state ready to accept
requests.
``-EAGAIN``
A request was being processed, and the caller should retry.
``-EBUSY``
The subchannel was status pending or busy while processing a halt request.
vfio-ccw schib region
---------------------
The vfio-ccw schib region is used to return Subchannel-Information
Block (SCHIB) data to userspace::
struct ccw_schib_region {
#define SCHIB_AREA_SIZE 52
__u8 schib_area[SCHIB_AREA_SIZE];
} __packed;
This region is exposed via region type VFIO_REGION_SUBTYPE_CCW_SCHIB.
Reading this region triggers a STORE SUBCHANNEL to be issued to the
associated hardware.
vfio-ccw crw region
---------------------
The vfio-ccw crw region is used to return Channel Report Word (CRW)
data to userspace::
struct ccw_crw_region {
__u32 crw;
__u32 pad;
} __packed;
This region is exposed via region type VFIO_REGION_SUBTYPE_CCW_CRW.
Reading this region returns a CRW if one that is relevant for this
subchannel (e.g. one reporting changes in channel path state) is
pending, or all zeroes if not. If multiple CRWs are pending (including
possibly chained CRWs), reading this region again will return the next
one, until no more CRWs are pending and zeroes are returned. This is
similar to how STORE CHANNEL REPORT WORD works.
vfio-ccw operation details vfio-ccw operation details
-------------------------- --------------------------
@ -333,7 +420,14 @@ through DASD/ECKD device online in a guest now and use it as a block
device. device.
The current code allows the guest to start channel programs via The current code allows the guest to start channel programs via
START SUBCHANNEL, and to issue HALT SUBCHANNEL and CLEAR SUBCHANNEL. START SUBCHANNEL, and to issue HALT SUBCHANNEL, CLEAR SUBCHANNEL,
and STORE SUBCHANNEL.
Currently all channel programs are prefetched, regardless of the
p-bit setting in the ORB. As a result, self modifying channel
programs are not supported. For this reason, IPL has to be handled as
a special case by a userspace/guest program; this has been implemented
in QEMU's s390-ccw bios as of QEMU 4.1.
vfio-ccw supports classic (command mode) channel I/O only. Transport vfio-ccw supports classic (command mode) channel I/O only. Transport
mode (HPF) is not supported. mode (HPF) is not supported.


@@ -46,5 +46,5 @@ initramfs with a user space application that writes the dump to a SCSI
 partition.

 For more information on how to use zfcpdump refer to the s390 'Using the Dump
-Tools book', which is available from
-http://www.ibm.com/developerworks/linux/linux390.
+Tools' book, which is available from IBM Knowledge Center:
+https://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_dt.html

File diff suppressed because it is too large


@@ -191,15 +191,15 @@ The usage pattern is::

       again:
             range.notifier_seq = mmu_interval_read_begin(&interval_sub);
-            down_read(&mm->mmap_sem);
+            mmap_read_lock(mm);
             ret = hmm_range_fault(&range);
             if (ret) {
-                  up_read(&mm->mmap_sem);
+                  mmap_read_unlock(mm);
                   if (ret == -EBUSY)
                         goto again;
                   return ret;
             }
-            up_read(&mm->mmap_sem);
+            mmap_read_unlock(mm);

             take_lock(driver->update);
             if (mmu_interval_read_retry(&ni, range.notifier_seq)) {


@@ -98,9 +98,9 @@ split_huge_page() or split_huge_pmd() has a cost.

 To make pagetable walks huge pmd aware, all you need to do is to call
 pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
-mmap_sem in read (or write) mode to be sure a huge pmd cannot be
+mmap_lock in read (or write) mode to be sure a huge pmd cannot be
 created from under you by khugepaged (khugepaged collapse_huge_page
-takes the mmap_sem in write mode in addition to the anon_vma lock). If
+takes the mmap_lock in write mode in addition to the anon_vma lock). If
 pmd_trans_huge returns false, you just fallback in the old code
 paths. If instead pmd_trans_huge returns true, you have to take the
 page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the


@@ -7251,7 +7251,7 @@ L: cluster-devel@redhat.com
 S:	Supported
 W:	http://sources.redhat.com/cluster/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2.git
-F:	Documentation/filesystems/gfs2*.txt
+F:	Documentation/filesystems/gfs2*
 F:	fs/gfs2/
 F:	include/uapi/linux/gfs2_ondisk.h

@@ -8502,6 +8502,7 @@ F: drivers/mtd/nand/raw/ingenic/
 F:	drivers/pinctrl/pinctrl-ingenic.c
 F:	drivers/power/supply/ingenic-battery.c
 F:	drivers/pwm/pwm-jz4740.c
+F:	drivers/remoteproc/ingenic_rproc.c
 F:	drivers/rtc/rtc-jz4740.c
 F:	drivers/tty/serial/8250/8250_ingenic.c
 F:	drivers/usb/musb/jz4740.c

@@ -14840,6 +14841,7 @@ S: Supported
 W:	http://www.ibm.com/developerworks/linux/linux390/
 F:	arch/s390/pci/
 F:	drivers/pci/hotplug/s390_pci_hpc.c
+F:	Documentation/s390/pci.rst

 S390 VFIO AP DRIVER
 M:	Tony Krowiak <akrowiak@linux.ibm.com>


@@ -16,7 +16,6 @@
 #include <asm/console.h>
 #include <asm/hwrpb.h>
-#include <asm/pgtable.h>
 #include <asm/io.h>

 #include <stdarg.h>


@@ -18,7 +18,6 @@
 #include <asm/console.h>
 #include <asm/hwrpb.h>
-#include <asm/pgtable.h>
 #include <asm/io.h>

 #include <stdarg.h>


@@ -14,7 +14,6 @@
 #include <asm/console.h>
 #include <asm/hwrpb.h>
-#include <asm/pgtable.h>

 #include <stdarg.h>


@@ -4,19 +4,6 @@

 #include <linux/mm.h>

-/* Caches aren't brain-dead on the Alpha. */
-#define flush_cache_all()			do { } while (0)
-#define flush_cache_mm(mm)			do { } while (0)
-#define flush_cache_dup_mm(mm)			do { } while (0)
-#define flush_cache_range(vma, start, end)	do { } while (0)
-#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
-#define flush_dcache_page(page)			do { } while (0)
-#define flush_dcache_mmap_lock(mapping)		do { } while (0)
-#define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_cache_vmap(start, end)		do { } while (0)
-#define flush_cache_vunmap(start, end)		do { } while (0)
-
 /* Note that the following two definitions are _highly_ dependent
    on the contexts in which they are used in the kernel.  I personally
    think it is criminal how loosely defined these macros are. */
@@ -48,7 +35,7 @@ extern void smp_imb(void);
 extern void __load_new_mm_context(struct mm_struct *);
 static inline void
-flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
+flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 			unsigned long addr, int len)
 {
 	if (vma->vm_flags & VM_EXEC) {
@@ -59,20 +46,17 @@ flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
 		mm->context[smp_processor_id()] = 0;
 	}
 }
-#else
-extern void flush_icache_user_range(struct vm_area_struct *vma,
+#define flush_icache_user_page flush_icache_user_page
+#else /* CONFIG_SMP */
+extern void flush_icache_user_page(struct vm_area_struct *vma,
 		struct page *page, unsigned long addr, int len);
-#endif
+#define flush_icache_user_page flush_icache_user_page
+#endif /* CONFIG_SMP */

 /* This is used only in __do_fault and do_swap_page. */
 #define flush_icache_page(vma, page) \
-	flush_icache_user_range((vma), (page), 0, 0)
+	flush_icache_user_page((vma), (page), 0, 0)

-#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
-do { memcpy(dst, src, len); \
-     flush_icache_user_range(vma, page, vaddr, len); \
-} while (0)
-
-#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	memcpy(dst, src, len)
+#include <asm-generic/cacheflush.h>

 #endif /* _ALPHA_CACHEFLUSH_H */


@@ -7,7 +7,6 @@
 #include <linux/kernel.h>
 #include <linux/mm.h>
 #include <asm/compiler.h>
-#include <asm/pgtable.h>
 #include <asm/machvec.h>
 #include <asm/hwrpb.h>


@@ -276,15 +276,6 @@ extern inline pte_t pte_mkwrite(pte_t pte) { pte_val(pte) &= ~_PAGE_FOW; return
 extern inline pte_t pte_mkdirty(pte_t pte)	{ pte_val(pte) |= __DIRTY_BITS; return pte; }
 extern inline pte_t pte_mkyoung(pte_t pte)	{ pte_val(pte) |= __ACCESS_BITS; return pte; }

-#define PAGE_DIR_OFFSET(tsk,address) pgd_offset((tsk),(address))
-
-/* to find an entry in a kernel page-table-directory */
-#define pgd_offset_k(address) pgd_offset(&init_mm, (address))
-
-/* to find an entry in a page-table-directory. */
-#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
-#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
-
 /*
  * The smp_read_barrier_depends() in the following functions are required to
  * order the load of *dir (the pointer in the top level page table) with any
@@ -305,6 +296,7 @@ extern inline pmd_t * pmd_offset(pud_t * dir, unsigned long address)
 	smp_read_barrier_depends(); /* see above */
 	return ret;
 }
+#define pmd_offset pmd_offset

 /* Find an entry in the third-level page table.. */
 extern inline pte_t * pte_offset_kernel(pmd_t * dir, unsigned long address)
@@ -314,9 +306,7 @@ extern inline pte_t * pte_offset_kernel(pmd_t * dir, unsigned long address)
 	smp_read_barrier_depends(); /* see above */
 	return ret;
 }
-
-#define pte_offset_map(dir,addr)	pte_offset_kernel((dir),(addr))
-#define pte_unmap(pte)			do { } while (0)
+#define pte_offset_kernel pte_offset_kernel

 extern pgd_t swapper_pg_dir[1024];

@@ -355,8 +345,6 @@ extern inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)

 extern void paging_init(void);

-#include <asm-generic/pgtable.h>
-
 /* We have our own get_unmapped_area to cope with ADDR_LIMIT_32BIT. */
 #define HAVE_ARCH_UNMAPPED_AREA

==== next file (name not shown) ====

@@ -37,7 +37,6 @@
 #include <asm/reg.h>
 #include <linux/uaccess.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/hwrpb.h>
 #include <asm/fpu.h>

==== next file (name not shown) ====

@@ -2,8 +2,6 @@
 #include <linux/interrupt.h>
 #include <linux/io.h>

-#include <asm/pgtable.h>
-
 /* Prototypes of functions used across modules here in this directory. */

 #define vucp	volatile unsigned char  *

==== next file (name not shown) ====

@@ -19,7 +19,6 @@
 #include <linux/audit.h>
 #include <linux/uaccess.h>

-#include <asm/pgtable.h>
 #include <asm/fpu.h>

 #include "proto.h"

==== next file (name not shown) ====

@@ -55,7 +55,6 @@ static struct notifier_block alpha_panic_block = {
 };

 #include <linux/uaccess.h>
-#include <asm/pgtable.h>
 #include <asm/hwrpb.h>
 #include <asm/dma.h>
 #include <asm/mmu_context.h>

==== next file (name not shown) ====

@@ -36,7 +36,6 @@
 #include <asm/io.h>
 #include <asm/irq.h>
-#include <asm/pgtable.h>
 #include <asm/pgalloc.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -740,7 +739,7 @@ ipi_flush_icache_page(void *x)
 }

 void
-flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
+flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 			unsigned long addr, int len)
 {
 	struct mm_struct *mm = vma->vm_mm;

==== next file (name not shown) ====

@@ -23,7 +23,6 @@
 #include <asm/dma.h>
 #include <asm/mmu_context.h>
 #include <asm/irq.h>
-#include <asm/pgtable.h>
 #include <asm/core_cia.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -23,7 +23,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_apecs.h>
 #include <asm/core_cia.h>
 #include <asm/core_lca.h>

==== next file (name not shown) ====

@@ -26,7 +26,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_tsunami.h>
 #include <asm/hwrpb.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -22,7 +22,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_apecs.h>
 #include <asm/core_lca.h>
 #include <asm/hwrpb.h>

==== next file (name not shown) ====

@@ -23,7 +23,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_tsunami.h>
 #include <asm/hwrpb.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -25,7 +25,6 @@
 #include <asm/dma.h>
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
-#include <asm/pgtable.h>
 #include <asm/tlbflush.h>

 #include "proto.h"

==== next file (name not shown) ====

@@ -18,7 +18,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_marvel.h>
 #include <asm/hwrpb.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -22,7 +22,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_cia.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -23,7 +23,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_apecs.h>
 #include <asm/core_cia.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -40,7 +40,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_irongate.h>
 #include <asm/hwrpb.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -24,7 +24,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_apecs.h>
 #include <asm/core_cia.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -21,7 +21,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_mcpcia.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -23,7 +23,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_cia.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -22,7 +22,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_polaris.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -21,7 +21,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_t2.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -25,7 +25,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_apecs.h>
 #include <asm/core_lca.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -22,7 +22,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_cia.h>
 #include <asm/hwrpb.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -21,7 +21,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_cia.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -26,7 +26,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_titan.h>
 #include <asm/hwrpb.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -20,7 +20,6 @@
 #include <asm/irq.h>
 #include <asm/mmu_context.h>
 #include <asm/io.h>
-#include <asm/pgtable.h>
 #include <asm/core_wildfire.h>
 #include <asm/hwrpb.h>
 #include <asm/tlbflush.h>

==== next file (name not shown) ====

@@ -121,10 +121,10 @@ dik_show_code(unsigned int *pc)
 }

 static void
-dik_show_trace(unsigned long *sp)
+dik_show_trace(unsigned long *sp, const char *loglvl)
 {
 	long i = 0;
-	printk("Trace:\n");
+	printk("%sTrace:\n", loglvl);
 	while (0x1ff8 & (unsigned long) sp) {
 		extern char _stext[], _etext[];
 		unsigned long tmp = *sp;
@@ -133,24 +133,24 @@ dik_show_trace(unsigned long *sp)
 			continue;
 		if (tmp >= (unsigned long) &_etext)
 			continue;
-		printk("[<%lx>] %pSR\n", tmp, (void *)tmp);
+		printk("%s[<%lx>] %pSR\n", loglvl, tmp, (void *)tmp);
 		if (i > 40) {
-			printk(" ...");
+			printk("%s ...", loglvl);
 			break;
 		}
 	}
-	printk("\n");
+	printk("%s\n", loglvl);
 }

 static int kstack_depth_to_print = 24;

-void show_stack(struct task_struct *task, unsigned long *sp)
+void show_stack(struct task_struct *task, unsigned long *sp, const char *loglvl)
 {
 	unsigned long *stack;
 	int i;

 	/*
-	 * debugging aid: "show_stack(NULL);" prints the
+	 * debugging aid: "show_stack(NULL, NULL, KERN_EMERG);" prints the
 	 * back trace for this cpu.
 	 */
 	if(sp==NULL)
@@ -163,14 +163,14 @@ void show_stack(struct task_struct *task, unsigned long *sp)
 		if ((i % 4) == 0) {
 			if (i)
 				pr_cont("\n");
-			printk("       ");
+			printk("%s       ", loglvl);
 		} else {
 			pr_cont(" ");
 		}
 		pr_cont("%016lx", *stack++);
 	}
 	pr_cont("\n");
-	dik_show_trace(sp);
+	dik_show_trace(sp, loglvl);
 }

 void
@@ -184,7 +184,7 @@ die_if_kernel(char * str, struct pt_regs *regs, long err, unsigned long *r9_15)
 	printk("%s(%d): %s %ld\n", current->comm, task_pid_nr(current), str, err);
 	dik_show_regs(regs, r9_15);
 	add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
-	dik_show_trace((unsigned long *)(regs+1));
+	dik_show_trace((unsigned long *)(regs+1), KERN_DEFAULT);
 	dik_show_code((unsigned int *)regs->pc);

 	if (test_and_set_thread_flag (TIF_DIE_IF_KERNEL)) {
@@ -625,7 +625,7 @@ do_entUna(void * va, unsigned long opcode, unsigned long reg,
 	printk("gp = %016lx  sp = %p\n", regs->gp, regs+1);
 	dik_show_code((unsigned int *)pc);
-	dik_show_trace((unsigned long *)(regs+1));
+	dik_show_trace((unsigned long *)(regs+1), KERN_DEFAULT);

 	if (test_and_set_thread_flag (TIF_DIE_IF_KERNEL)) {
 		printk("die_if_kernel recursion detected.\n");
@@ -957,12 +957,12 @@ do_entUnaUser(void __user * va, unsigned long opcode,
 		si_code = SEGV_ACCERR;
 	else {
 		struct mm_struct *mm = current->mm;
-		down_read(&mm->mmap_sem);
+		mmap_read_lock(mm);
 		if (find_vma(mm, (unsigned long)va))
 			si_code = SEGV_ACCERR;
 		else
 			si_code = SEGV_MAPERR;
-		up_read(&mm->mmap_sem);
+		mmap_read_unlock(mm);
 	}
 	send_sig_fault(SIGSEGV, si_code, va, 0, current);
 	return;

==== next file (name not shown) ====

@@ -117,7 +117,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;
 retry:
-	down_read(&mm->mmap_sem);
+	mmap_read_lock(mm);
 	vma = find_vma(mm, address);
 	if (!vma)
 		goto bad_area;
@@ -171,7 +171,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 		if (fault & VM_FAULT_RETRY) {
 			flags |= FAULT_FLAG_TRIED;

-			/* No need to up_read(&mm->mmap_sem) as we would
+			/* No need to mmap_read_unlock(mm) as we would
 			 * have already released it in __lock_page_or_retry
 			 * in mm/filemap.c.
 			 */
@@ -180,14 +180,14 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 		}
 	}

-	up_read(&mm->mmap_sem);
+	mmap_read_unlock(mm);

 	return;

 	/* Something tried to access memory that isn't in our memory map.
 	   Fix it, but check if it's kernel or user first. */
  bad_area:
-	up_read(&mm->mmap_sem);
+	mmap_read_unlock(mm);

 	if (user_mode(regs))
 		goto do_sigsegv;
@@ -211,14 +211,14 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
 	/* We ran out of memory, or some other thing happened to us that
 	   made us unable to handle the page fault gracefully.  */
  out_of_memory:
-	up_read(&mm->mmap_sem);
+	mmap_read_unlock(mm);
 	if (!user_mode(regs))
 		goto no_context;
 	pagefault_out_of_memory();
 	return;

  do_sigbus:
-	up_read(&mm->mmap_sem);
+	mmap_read_unlock(mm);
 	/* Send a sigbus, regardless of whether we were in kernel
 	   or user mode.  */
 	force_sig_fault(SIGBUS, BUS_ADRERR, (void __user *) address, 0);

==== next file (name not shown) ====

@@ -24,7 +24,6 @@
 #include <linux/gfp.h>
 #include <linux/uaccess.h>

-#include <asm/pgtable.h>
 #include <asm/pgalloc.h>
 #include <asm/hwrpb.h>
 #include <asm/dma.h>

==== next file (name not shown) ====

@@ -13,7 +13,8 @@
 struct task_struct;

 void show_regs(struct pt_regs *regs);
-void show_stacktrace(struct task_struct *tsk, struct pt_regs *regs);
+void show_stacktrace(struct task_struct *tsk, struct pt_regs *regs,
+		     const char *loglvl);
 void show_kernel_fault_diag(const char *str, struct pt_regs *regs,
 			    unsigned long address);
 void die(const char *str, struct pt_regs *regs, unsigned long address);

==== next file (name not shown) ====

@@ -248,9 +248,6 @@
 extern char empty_zero_page[PAGE_SIZE];
 #define ZERO_PAGE(vaddr)	(virt_to_page(empty_zero_page))

-#define pte_unmap(pte)		do { } while (0)
-#define pte_unmap_nested(pte)	do { } while (0)
-
 #define set_pte(pteptr, pteval)	((*(pteptr)) = (pteval))
 #define set_pmd(pmdptr, pmdval)	(*(pmdptr) = pmdval)
@@ -282,18 +279,6 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)

-#define __pte_index(addr)	(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
-
-/*
- * pte_offset gets a @ptr to PMD entry (PGD in our 2-tier paging system)
- * and returns ptr to PTE entry corresponding to @addr
- */
-#define pte_offset(dir, addr) ((pte_t *)(pmd_page_vaddr(*dir)) +\
-					 __pte_index(addr))
-
-/* No mapping of Page Tables in high mem etc, so following same as above */
-#define pte_offset_kernel(dir, addr)	pte_offset(dir, addr)
-#define pte_offset_map(dir, addr)	pte_offset(dir, addr)
-
 /* Zoo of pte_xxx function */
 #define pte_read(pte)		(pte_val(pte) & _PAGE_READ)
@@ -331,13 +316,6 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 	set_pte(ptep, pteval);
 }

-/*
- * All kernel related VM pages are in init's mm.
- */
-#define pgd_offset_k(address)	pgd_offset(&init_mm, address)
-#define pgd_index(addr)		((addr) >> PGDIR_SHIFT)
-#define pgd_offset(mm, addr)	(((mm)->pgd) + pgd_index(addr))
-
 /*
  * Macro to quickly access the PGD entry, utlising the fact that some
  * arch may cache the pointer to Page Directory of "current" task
@@ -390,8 +368,6 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
 #include <asm/hugepage.h>
 #endif

-#include <asm-generic/pgtable.h>
-
 /* to cope with aliasing VIPT cache */
 #define HAVE_ARCH_UNMAPPED_AREA

==== next file (name not shown) ====

@@ -90,10 +90,10 @@ SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
 	if (unlikely(ret != -EFAULT))
 		 goto fail;

-	down_read(&current->mm->mmap_sem);
+	mmap_read_lock(current->mm);
 	ret = fixup_user_fault(current, current->mm, (unsigned long) uaddr,
 			       FAULT_FLAG_WRITE, NULL);
-	up_read(&current->mm->mmap_sem);
+	mmap_read_unlock(current->mm);

 	if (likely(!ret))
 		 goto again;

==== next file (name not shown) ====

@@ -158,9 +158,11 @@ arc_unwind_core(struct task_struct *tsk, struct pt_regs *regs,
 /* Call-back which plugs into unwinding core to dump the stack in
  * case of panic/OOPs/BUG etc
  */
-static int __print_sym(unsigned int address, void *unused)
+static int __print_sym(unsigned int address, void *arg)
 {
-	printk("  %pS\n", (void *)address);
+	const char *loglvl = arg;
+
+	printk("%s  %pS\n", loglvl, (void *)address);
 	return 0;
 }
@@ -217,17 +219,18 @@ static int __get_first_nonsched(unsigned int address, void *unused)
  *-------------------------------------------------------------------------
  */

-noinline void show_stacktrace(struct task_struct *tsk, struct pt_regs *regs)
+noinline void show_stacktrace(struct task_struct *tsk, struct pt_regs *regs,
+			      const char *loglvl)
 {
-	pr_info("\nStack Trace:\n");
-	arc_unwind_core(tsk, regs, __print_sym, NULL);
+	printk("%s\nStack Trace:\n", loglvl);
+	arc_unwind_core(tsk, regs, __print_sym, (void *)loglvl);
 }
 EXPORT_SYMBOL(show_stacktrace);

 /* Expected by sched Code */
-void show_stack(struct task_struct *tsk, unsigned long *sp)
+void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 {
-	show_stacktrace(tsk, NULL);
+	show_stacktrace(tsk, NULL, loglvl);
 }

 /* Another API expected by schedular, shows up in "ps" as Wait Channel

==== next file (name not shown) ====

@@ -89,7 +89,7 @@ static void show_faulting_vma(unsigned long address)
 	/* can't use print_vma_addr() yet as it doesn't check for
 	 * non-inclusive vma
 	 */
-	down_read(&active_mm->mmap_sem);
+	mmap_read_lock(active_mm);
 	vma = find_vma(active_mm, address);

 	/* check against the find_vma( ) behaviour which returns the next VMA
@@ -111,7 +111,7 @@ static void show_faulting_vma(unsigned long address)
 	} else
 		pr_info("    @No matching VMA found\n");

-	up_read(&active_mm->mmap_sem);
+	mmap_read_unlock(active_mm);
 }

 static void show_ecr_verbose(struct pt_regs *regs)
@@ -240,5 +240,5 @@ void show_kernel_fault_diag(const char *str, struct pt_regs *regs,

 	/* Show stack trace if this Fatality happened in kernel mode */
 	if (!user_mode(regs))
-		show_stacktrace(current, regs);
+		show_stacktrace(current, regs, KERN_DEFAULT);
 }

==== next file (name not shown) ====

@@ -107,7 +107,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 		flags |= FAULT_FLAG_WRITE;

 retry:
-	down_read(&mm->mmap_sem);
+	mmap_read_lock(mm);

 	vma = find_vma(mm, address);
 	if (!vma)
@@ -141,7 +141,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	}

 	/*
-	 * Fault retry nuances, mmap_sem already relinquished by core mm
+	 * Fault retry nuances, mmap_lock already relinquished by core mm
 	 */
 	if (unlikely((fault & VM_FAULT_RETRY) &&
 		     (flags & FAULT_FLAG_ALLOW_RETRY))) {
@@ -150,7 +150,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
 	}

 bad_area:
-	up_read(&mm->mmap_sem);
+	mmap_read_unlock(mm);

 	/*
 	 * Major/minor page fault accounting
View File

@ -6,8 +6,8 @@
#include <linux/memblock.h> #include <linux/memblock.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/highmem.h> #include <linux/highmem.h>
#include <linux/pgtable.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
@ -92,17 +92,9 @@ EXPORT_SYMBOL(kunmap_atomic_high);
static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr) static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
{ {
pgd_t *pgd_k; pmd_t *pmd_k = pmd_off_k(kvaddr);
p4d_t *p4d_k;
pud_t *pud_k;
pmd_t *pmd_k;
pte_t *pte_k; pte_t *pte_k;
pgd_k = pgd_offset_k(kvaddr);
p4d_k = p4d_offset(pgd_k, kvaddr);
pud_k = pud_offset(p4d_k, kvaddr);
pmd_k = pmd_offset(pud_k, kvaddr);
pte_k = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE); pte_k = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
if (!pte_k) if (!pte_k)
panic("%s: Failed to allocate %lu bytes align=0x%lx\n", panic("%s: Failed to allocate %lu bytes align=0x%lx\n",

==== next file (name not shown) ====

@@ -33,9 +33,9 @@
  */

 #include <linux/linkage.h>
+#include <linux/pgtable.h>
 #include <asm/entry.h>
 #include <asm/mmu.h>
-#include <asm/pgtable.h>
 #include <asm/arcregs.h>
 #include <asm/cache.h>
 #include <asm/processor.h>

==== next file (name not shown) ====

@@ -82,7 +82,8 @@ void hook_ifault_code(int nr, int (*fn)(unsigned long, unsigned int,
 				       struct pt_regs *),
 		     int sig, int code, const char *name);

-extern asmlinkage void c_backtrace(unsigned long fp, int pmode);
+extern asmlinkage void c_backtrace(unsigned long fp, int pmode,
+				   const char *loglvl);

 struct mm_struct;
 void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr);

==== next file (name not shown) ====

@@ -258,11 +258,11 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr
 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)

 /*
- * flush_cache_user_range is used when we want to ensure that the
+ * flush_icache_user_range is used when we want to ensure that the
  * Harvard caches are synchronised for the user space address range.
  * This is used for the ARM private sys_cacheflush system call.
  */
-#define flush_cache_user_range(s,e)	__cpuc_coherent_user_range(s,e)
+#define flush_icache_user_range(s,e)	__cpuc_coherent_user_range(s,e)

 /*
  * Perform necessary cache operations to ensure that data previously
@@ -318,9 +318,6 @@ extern void flush_kernel_dcache_page(struct page *);
 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)

-#define flush_icache_user_range(vma,page,addr,len) \
-	flush_dcache_page(page)
-
 /*
  * We don't appear to need to do anything here.  In fact, if we did, we'd
  * duplicate cache flushing elsewhere performed by flush_dcache_page().

==== next file (name not shown) ====

@@ -13,7 +13,6 @@
 #include <asm/highmem.h>
 #include <asm/mach/map.h>
 #include <asm/mmu_context.h>
-#include <asm/pgtable.h>
 #include <asm/ptrace.h>

 #ifdef CONFIG_EFI

==== next file (name not shown) ====

@@ -6,8 +6,8 @@
 #define FIXADDR_END		0xfff00000UL
 #define FIXADDR_TOP		(FIXADDR_END - PAGE_SIZE)

+#include <linux/pgtable.h>
 #include <asm/kmap_types.h>
-#include <asm/pgtable.h>

 enum fixed_addresses {
 	FIX_EARLYCON_MEM_BASE,

==== next file (name not shown) ====

@@ -3,7 +3,7 @@
 #define __ASM_IDMAP_H

 #include <linux/compiler.h>
-#include <asm/pgtable.h>
+#include <linux/pgtable.h>

 /* Tag a function as requiring to be executed via an identity mapping. */
 #define __idmap __section(.idmap.text) noinline notrace

==== next file (name not shown) ====

@@ -187,6 +187,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
 {
 	return (pmd_t *)pud;
 }
+#define pmd_offset pmd_offset

 #define pmd_large(pmd)		(pmd_val(pmd) & 2)
 #define pmd_leaf(pmd)		(pmd_val(pmd) & 2)

==== next file (name not shown) ====

@@ -133,13 +133,6 @@ static inline pmd_t *pud_page_vaddr(pud_t pud)
 	return __va(pud_val(pud) & PHYS_MASK & (s32)PAGE_MASK);
 }

-/* Find an entry in the second-level page table.. */
-#define pmd_index(addr)		(((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
-
-static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
-{
-	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(addr);
-}
-
 #define pmd_bad(pmd)		(!(pmd_val(pmd) & 2))

 #define copy_pmd(pmdpd,pmdps)		\

==== next file (name not shown) ====

@@ -22,7 +22,6 @@
 #define pgd_bad(pgd)		(0)
 #define pgd_clear(pgdp)
 #define kern_addr_valid(addr)	(1)
-#define	pmd_offset(a, b)	((void *)0)
 /* FIXME */
 /*
  * PMD_SHIFT determines the size of the area a second-level page table can map
@@ -73,8 +72,6 @@ extern unsigned int kobjsize(const void *objp);
 #define FIRST_USER_ADDRESS      0UL

-#include <asm-generic/pgtable.h>
-
 #else

 /*

==== next file (name not shown) ====

@@ -166,14 +166,6 @@ extern struct page *empty_zero_page;

 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];

-/* to find an entry in a page-table-directory */
-#define pgd_index(addr)		((addr) >> PGDIR_SHIFT)
-
-#define pgd_offset(mm, addr)	((mm)->pgd + pgd_index(addr))
-
-/* to find an entry in a kernel page-table-directory */
-#define pgd_offset_k(addr)	pgd_offset(&init_mm, addr)
-
 #define pmd_none(pmd)		(!pmd_val(pmd))

 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
@@ -183,21 +175,6 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)

 #define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))

-#ifndef CONFIG_HIGHPTE
-#define __pte_map(pmd)		pmd_page_vaddr(*(pmd))
-#define __pte_unmap(pte)	do { } while (0)
-#else
-#define __pte_map(pmd)		(pte_t *)kmap_atomic(pmd_page(*(pmd)))
-#define __pte_unmap(pte)	kunmap_atomic(pte)
-#endif
-
-#define pte_index(addr)		(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
-
-#define pte_offset_kernel(pmd,addr)	(pmd_page_vaddr(*(pmd)) + pte_index(addr))
-
-#define pte_offset_map(pmd,addr)	(__pte_map(pmd) + pte_index(addr))
-#define pte_unmap(pte)			__pte_unmap(pte)
-
 #define pte_pfn(pte)		((pte_val(pte) & PHYS_MASK) >> PAGE_SHIFT)
 #define pfn_pte(pfn,prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))
@@ -339,8 +316,6 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 /* FIXME: this is not correct */
 #define kern_addr_valid(addr)	(1)

-#include <asm-generic/pgtable.h>
-
 /*
  * We provide our own arch_get_unmapped_area to cope with VIPT caches.
  */


@@ -29,7 +29,8 @@ static inline int __in_irqentry_text(unsigned long ptr)
 }
 extern void __init early_trap_init(void *);
-extern void dump_backtrace_entry(unsigned long where, unsigned long from, unsigned long frame);
+extern void dump_backtrace_entry(unsigned long where, unsigned long from,
+				 unsigned long frame, const char *loglvl);
 extern void ptrace_break(struct pt_regs *regs);
 extern void *vectors_page;


@@ -36,7 +36,8 @@ extern struct unwind_table *unwind_table_add(unsigned long start,
 					     unsigned long text_addr,
 					     unsigned long text_size);
 extern void unwind_table_del(struct unwind_table *tab);
-extern void unwind_backtrace(struct pt_regs *regs, struct task_struct *tsk);
+extern void unwind_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+			     const char *loglvl);
 #endif	/* !__ASSEMBLY__ */


@@ -98,8 +98,8 @@ void set_fiq_handler(void *start, unsigned int length)
 	memcpy(base + offset, start, length);
 	if (!cache_is_vipt_nonaliasing())
-		flush_icache_range((unsigned long)base + offset, offset +
-				   length);
+		flush_icache_range((unsigned long)base + offset,
+				   (unsigned long)base + offset + length);
 	flush_icache_range(0xffff0000 + offset, 0xffff0000 + offset + length);
 }


@@ -10,6 +10,7 @@
  */
 #include <linux/linkage.h>
 #include <linux/init.h>
+#include <linux/pgtable.h>
 #include <asm/assembler.h>
 #include <asm/cp15.h>
@@ -18,7 +19,6 @@
 #include <asm/asm-offsets.h>
 #include <asm/memory.h>
 #include <asm/thread_info.h>
-#include <asm/pgtable.h>
 #if defined(CONFIG_DEBUG_LL) && !defined(CONFIG_DEBUG_SEMIHOSTING)
 #include CONFIG_DEBUG_LL_INCLUDE


@@ -10,7 +10,6 @@
 #include <linux/io.h>
 #include <linux/irq.h>
 #include <linux/memblock.h>
-#include <asm/pgtable.h>
 #include <linux/of_fdt.h>
 #include <asm/pgalloc.h>
 #include <asm/mmu_context.h>


@@ -17,7 +17,6 @@
 #include <linux/string.h>
 #include <linux/gfp.h>
-#include <asm/pgtable.h>
 #include <asm/sections.h>
 #include <asm/smp_plat.h>
 #include <asm/unwind.h>


@@ -431,7 +431,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	npages = 1; /* for sigpage */
 	npages += vdso_total_pages;
-	if (down_write_killable(&mm->mmap_sem))
+	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 	hint = sigpage_addr(mm, npages);
 	addr = get_unmapped_area(NULL, hint, npages << PAGE_SHIFT, 0, 0);
@@ -458,7 +458,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	arm_install_vdso(mm, addr + PAGE_SIZE);
 up_fail:
-	up_write(&mm->mmap_sem);
+	mmap_write_unlock(mm);
 	return ret;
 }
 #endif


@@ -25,7 +25,6 @@
 #include <linux/tracehook.h>
 #include <linux/unistd.h>
-#include <asm/pgtable.h>
 #include <asm/traps.h>
 #define CREATE_TRACE_POINTS


@@ -37,7 +37,6 @@
 #include <asm/idmap.h>
 #include <asm/topology.h>
 #include <asm/mmu_context.h>
-#include <asm/pgtable.h>
 #include <asm/pgalloc.h>
 #include <asm/procinfo.h>
 #include <asm/processor.h>


@@ -2,12 +2,12 @@
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/mm_types.h>
+#include <linux/pgtable.h>
 #include <asm/bugs.h>
 #include <asm/cacheflush.h>
 #include <asm/idmap.h>
 #include <asm/pgalloc.h>
-#include <asm/pgtable.h>
 #include <asm/memory.h>
 #include <asm/smp_plat.h>
 #include <asm/suspend.h>


@@ -97,12 +97,12 @@ static void set_segfault(struct pt_regs *regs, unsigned long addr)
 {
 	int si_code;
-	down_read(&current->mm->mmap_sem);
+	mmap_read_lock(current->mm);
 	if (find_vma(current->mm, addr) == NULL)
 		si_code = SEGV_MAPERR;
 	else
 		si_code = SEGV_ACCERR;
-	up_read(&current->mm->mmap_sem);
+	mmap_read_unlock(current->mm);
 	pr_debug("SWP{B} emulation: access caused memory abort!\n");
 	arm_notify_die("Illegal memory access", regs,


@@ -62,21 +62,24 @@ __setup("user_debug=", user_debug_setup);
 static void dump_mem(const char *, const char *, unsigned long, unsigned long);
-void dump_backtrace_entry(unsigned long where, unsigned long from, unsigned long frame)
+void dump_backtrace_entry(unsigned long where, unsigned long from,
+			  unsigned long frame, const char *loglvl)
 {
 	unsigned long end = frame + 4 + sizeof(struct pt_regs);
 #ifdef CONFIG_KALLSYMS
-	printk("[<%08lx>] (%ps) from [<%08lx>] (%pS)\n", where, (void *)where, from, (void *)from);
+	printk("%s[<%08lx>] (%ps) from [<%08lx>] (%pS)\n",
+		loglvl, where, (void *)where, from, (void *)from);
 #else
-	printk("Function entered at [<%08lx>] from [<%08lx>]\n", where, from);
+	printk("%sFunction entered at [<%08lx>] from [<%08lx>]\n",
+		loglvl, where, from);
 #endif
 	if (in_entry_text(from) && end <= ALIGN(frame, THREAD_SIZE))
-		dump_mem("", "Exception stack", frame + 4, end);
+		dump_mem(loglvl, "Exception stack", frame + 4, end);
 }
-void dump_backtrace_stm(u32 *stack, u32 instruction)
+void dump_backtrace_stm(u32 *stack, u32 instruction, const char *loglvl)
 {
 	char str[80], *p;
 	unsigned int x;
@@ -88,12 +91,12 @@ void dump_backtrace_stm(u32 *stack, u32 instruction)
 			if (++x == 6) {
 				x = 0;
 				p = str;
-				printk("%s\n", str);
+				printk("%s%s\n", loglvl, str);
 			}
 		}
 	}
 	if (p != str)
-		printk("%s\n", str);
+		printk("%s%s\n", loglvl, str);
 }
 #ifndef CONFIG_ARM_UNWIND
@@ -201,17 +204,19 @@ static void dump_instr(const char *lvl, struct pt_regs *regs)
 }
 #ifdef CONFIG_ARM_UNWIND
-static inline void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+static inline void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+				  const char *loglvl)
 {
-	unwind_backtrace(regs, tsk);
+	unwind_backtrace(regs, tsk, loglvl);
 }
 #else
-static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+			   const char *loglvl)
 {
 	unsigned int fp, mode;
 	int ok = 1;
-	printk("Backtrace: ");
+	printk("%sBacktrace: ", loglvl);
 	if (!tsk)
 		tsk = current;
@@ -238,13 +243,13 @@ static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 	pr_cont("\n");
 	if (ok)
-		c_backtrace(fp, mode);
+		c_backtrace(fp, mode, loglvl);
 }
 #endif
-void show_stack(struct task_struct *tsk, unsigned long *sp)
+void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 {
-	dump_backtrace(NULL, tsk);
+	dump_backtrace(NULL, tsk, loglvl);
 	barrier();
 }
@@ -288,7 +293,7 @@ static int __die(const char *str, int err, struct pt_regs *regs)
 	if (!user_mode(regs) || in_interrupt()) {
 		dump_mem(KERN_EMERG, "Stack: ", regs->ARM_sp,
 			 THREAD_SIZE + (unsigned long)task_stack_page(tsk));
-		dump_backtrace(regs, tsk);
+		dump_backtrace(regs, tsk, KERN_EMERG);
 		dump_instr(KERN_EMERG, regs);
 	}
@@ -566,7 +571,7 @@ __do_cache_op(unsigned long start, unsigned long end)
 		if (fatal_signal_pending(current))
 			return 0;
-		ret = flush_cache_user_range(start, start + chunk);
+		ret = flush_icache_user_range(start, start + chunk);
 		if (ret)
 			return ret;
@@ -663,10 +668,10 @@ asmlinkage int arm_syscall(int no, struct pt_regs *regs)
 		if (user_debug & UDBG_SYSCALL) {
 			pr_err("[%d] %s: arm syscall %d\n",
 			       task_pid_nr(current), current->comm, no);
-			dump_instr("", regs);
+			dump_instr(KERN_ERR, regs);
 			if (user_mode(regs)) {
 				__show_regs(regs);
-				c_backtrace(frame_pointer(regs), processor_mode(regs));
+				c_backtrace(frame_pointer(regs), processor_mode(regs), KERN_ERR);
 			}
 		}
 #endif


@@ -455,7 +455,8 @@ int unwind_frame(struct stackframe *frame)
 	return URC_OK;
 }
-void unwind_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+void unwind_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+		      const char *loglvl)
 {
 	struct stackframe frame;
@@ -493,7 +494,7 @@ void unwind_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 		urc = unwind_frame(&frame);
 		if (urc < 0)
 			break;
-		dump_backtrace_entry(where, frame.pc, frame.sp - 4);
+		dump_backtrace_entry(where, frame.pc, frame.sp - 4, loglvl);
 	}
 }


@@ -240,7 +240,7 @@ static int install_vvar(struct mm_struct *mm, unsigned long addr)
 	return PTR_ERR_OR_ZERO(vma);
 }
-/* assumes mmap_sem is write-locked */
+/* assumes mmap_lock is write-locked */
 void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
 {
 	struct vm_area_struct *vma;


@@ -8,13 +8,13 @@
 #include "vmlinux-xip.lds.S"
 #else
+#include <linux/pgtable.h>
 #include <asm-generic/vmlinux.lds.h>
 #include <asm/cache.h>
 #include <asm/thread_info.h>
 #include <asm/memory.h>
 #include <asm/mpu.h>
 #include <asm/page.h>
-#include <asm/pgtable.h>
 #include "vmlinux.lds.h"


@@ -17,6 +17,7 @@
 #define sv_pc	r6
 #define mask	r7
 #define sv_lr	r8
+#define loglvl	r9
 ENTRY(c_backtrace)
@@ -99,6 +100,7 @@ ENDPROC(c_backtrace)
 					@ to ensure 8 byte alignment
 		movs	frame, r0		@ if frame pointer is zero
 		beq	no_frame		@ we have no stack frames
+		mov	loglvl, r2
 		tst	r1, #0x10		@ 26 or 32-bit mode?
 		moveq	mask, #0xfc000003
 		movne	mask, #0		@ mask for 32-bit
@@ -167,6 +169,7 @@ finished_setup:
 		mov	r1, sv_lr
 		mov	r2, frame
 		bic	r1, r1, mask		@ mask PC/LR for the mode
+		mov	r3, loglvl
 		bl	dump_backtrace_entry
 /*
@@ -183,6 +186,7 @@ finished_setup:
 		ldr	r0, [frame]		@ locals are stored in
 						@ the preceding frame
 		subeq	r0, r0, #4
+		mov	r2, loglvl
 		bleq	dump_backtrace_stm	@ dump saved registers
 /*
@@ -196,7 +200,8 @@ finished_setup:
 		bhi	for_each_frame
 1006:		adr	r0, .Lbad
-		mov	r1, frame
+		mov	r1, loglvl
+		mov	r2, frame
 		bl	printk
 no_frame:	ldmfd	sp!, {r4 - r9, fp, pc}
 ENDPROC(c_backtrace)
@@ -209,7 +214,7 @@ ENDPROC(c_backtrace)
 		.long	1005b, 1006b
 		.popsection
-.Lbad:		.asciz	"Backtrace aborted due to bad frame pointer <%p>\n"
+.Lbad:		.asciz	"%sBacktrace aborted due to bad frame pointer <%p>\n"
 		.align
 .Lopcode:	.word	0xe92d4800 >> 11	@ stmfd sp!, {... fp, lr}
 		.word	0x0b000000		@ bl if these bits are set


@@ -18,6 +18,7 @@
 #define sv_pc	r6
 #define mask	r7
 #define offset	r8
+#define loglvl	r9
 ENTRY(c_backtrace)
@@ -25,9 +26,10 @@ ENTRY(c_backtrace)
 		ret	lr
 ENDPROC(c_backtrace)
 #else
-		stmfd	sp!, {r4 - r8, lr}	@ Save an extra register so we have a location...
+		stmfd	sp!, {r4 - r9, lr}	@ Save an extra register so we have a location...
 		movs	frame, r0		@ if frame pointer is zero
 		beq	no_frame		@ we have no stack frames
+		mov	loglvl, r2
 		tst	r1, #0x10		@ 26 or 32-bit mode?
 ARM(		moveq	mask, #0xfc000003	)
@@ -73,6 +75,7 @@ for_each_frame:	tst	frame, mask		@ Check for address exceptions
 		ldr	r1, [frame, #-4]	@ get saved lr
 		mov	r2, frame
 		bic	r1, r1, mask		@ mask PC/LR for the mode
+		mov	r3, loglvl
 		bl	dump_backtrace_entry
 		ldr	r1, [sv_pc, #-4]	@ if stmfd sp!, {args} exists,
@@ -80,12 +83,14 @@ for_each_frame:	tst	frame, mask		@ Check for address exceptions
 		teq	r3, r1, lsr #11
 		ldreq	r0, [frame, #-8]	@ get sp
 		subeq	r0, r0, #4		@ point at the last arg
+		mov	r2, loglvl
 		bleq	dump_backtrace_stm	@ dump saved registers
 1004:		ldr	r1, [sv_pc, #0]		@ if stmfd sp!, {..., fp, ip, lr, pc}
 		ldr	r3, .Ldsi		@ instruction exists,
 		teq	r3, r1, lsr #11
 		subeq	r0, frame, #16
+		mov	r2, loglvl
 		bleq	dump_backtrace_stm	@ dump saved registers
 		teq	sv_fp, #0		@ zero saved fp means
@@ -96,9 +101,10 @@ for_each_frame:	tst	frame, mask		@ Check for address exceptions
 		bhi	for_each_frame
 1006:		adr	r0, .Lbad
-		mov	r1, frame
+		mov	r1, loglvl
+		mov	r2, frame
 		bl	printk
-no_frame:	ldmfd	sp!, {r4 - r8, pc}
+no_frame:	ldmfd	sp!, {r4 - r9, pc}
 ENDPROC(c_backtrace)
 		.pushsection __ex_table,"a"
@@ -109,7 +115,7 @@ ENDPROC(c_backtrace)
 		.long	1004b, 1006b
 		.popsection
-.Lbad:		.asciz	"Backtrace aborted due to bad frame pointer <%p>\n"
+.Lbad:		.asciz	"%sBacktrace aborted due to bad frame pointer <%p>\n"
 		.align
 .Ldsi:		.word	0xe92dd800 >> 11	@ stmfd sp!, {... fp, ip, lr, pc}
 		.word	0xe92d0000 >> 11	@ stmfd sp!, {}


@@ -101,7 +101,7 @@ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
 	atomic = faulthandler_disabled();
 	if (!atomic)
-		down_read(&current->mm->mmap_sem);
+		mmap_read_lock(current->mm);
 	while (n) {
 		pte_t *pte;
 		spinlock_t *ptl;
@@ -109,11 +109,11 @@ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
 		while (!pin_page_for_write(to, &pte, &ptl)) {
 			if (!atomic)
-				up_read(&current->mm->mmap_sem);
+				mmap_read_unlock(current->mm);
 			if (__put_user(0, (char __user *)to))
 				goto out;
 			if (!atomic)
-				down_read(&current->mm->mmap_sem);
+				mmap_read_lock(current->mm);
 		}
 		tocopy = (~(unsigned long)to & ~PAGE_MASK) + 1;
@@ -133,7 +133,7 @@ __copy_to_user_memcpy(void __user *to, const void *from, unsigned long n)
 			spin_unlock(ptl);
 	}
 	if (!atomic)
-		up_read(&current->mm->mmap_sem);
+		mmap_read_unlock(current->mm);
 out:
 	return n;
@@ -170,17 +170,17 @@ __clear_user_memset(void __user *addr, unsigned long n)
 		return 0;
 	}
-	down_read(&current->mm->mmap_sem);
+	mmap_read_lock(current->mm);
 	while (n) {
 		pte_t *pte;
 		spinlock_t *ptl;
 		int tocopy;
 		while (!pin_page_for_write(addr, &pte, &ptl)) {
-			up_read(&current->mm->mmap_sem);
+			mmap_read_unlock(current->mm);
 			if (__put_user(0, (char __user *)addr))
 				goto out;
-			down_read(&current->mm->mmap_sem);
+			mmap_read_lock(current->mm);
 		}
 		tocopy = (~(unsigned long)addr & ~PAGE_MASK) + 1;
@@ -198,7 +198,7 @@ __clear_user_memset(void __user *addr, unsigned long n)
 		else
 			spin_unlock(ptl);
 	}
-	up_read(&current->mm->mmap_sem);
+	mmap_read_unlock(current->mm);
 out:
 	return n;


@@ -17,7 +17,6 @@
 #include <asm/irq.h>
 #include <asm/setup.h>
 #include <asm/mach-types.h>
-#include <asm/pgtable.h>
 #include <asm/page.h>
 #include <asm/system_misc.h>

Some files were not shown because too many files have changed in this diff.