Linux 5.0

-----BEGIN PGP SIGNATURE-----
 
 iQFSBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAlx8YYIeHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiGs5MIAIiVVIH+C0St60vf
 nzfGpVe+BETe199UveM4Ic2FWMk97ZhSk5Oj6HwYY9vnw4iwoRCZIO5B8Dna4nxY
 8XjiwxpJRVLq+7Y1d61O6NHo6UjFHF0GMzyeJeNNUq+mCISxZdLsqzsszt9X09mA
 GoJjZ0UMw2Tkz/s3Ie4MumKASc+y2CjJc0ZVEZlJsMaqMJLIfUn/CrTzHBivmuqJ
 sV6ZkP4as6h87bI9mi79p8pzvVooCRJ10cg4A/DHG4t2bEAIlB4t5dfZRFzVMhVo
 cCPRk9tiA9y4I3zBjcuAZMODcBpfdWoQK8TqYw2cDS3LEDMgnEdIH6snMYHr69z0
 kZJjA2A=
 =Qs0l
 -----END PGP SIGNATURE-----

Merge v5.0 into drm-next

There is a really hairy conflict resolution involving amdgpu fixes that I'd rather confirm here.

I also landed some misc fixes, but the pull request contains them as well.

Signed-off-by: Dave Airlie <airlied@redhat.com>
Dave Airlie
2019-03-04 12:02:55 +10:00
315 changed files with 2489 additions and 1526 deletions

CREDITS

@@ -842,10 +842,9 @@ D: ax25-utils maintainer.
 N: Helge Deller
 E: deller@gmx.de
-E: hdeller@redhat.de
-D: PA-RISC Linux hacker, LASI-, ASP-, WAX-, LCD/LED-driver
-S: Schimmelsrain 1
-S: D-69231 Rauenberg
+W: http://www.parisc-linux.org/
+D: PA-RISC Linux architecture maintainer
+D: LASI-, ASP-, WAX-, LCD/LED-driver
 S: Germany
 
 N: Jean Delvare
@@ -1361,7 +1360,7 @@ S: Stellenbosch, Western Cape
 S: South Africa
 
 N: Grant Grundler
-E: grundler@parisc-linux.org
+E: grantgrundler@gmail.com
 W: http://obmouse.sourceforge.net/
 W: http://www.parisc-linux.org/
 D: obmouse - rewrote Olivier Florent's Omnibook 600 "pop-up" mouse driver
@@ -2492,7 +2491,7 @@ S: Syracuse, New York 13206
 S: USA
 
 N: Kyle McMartin
-E: kyle@parisc-linux.org
+E: kyle@mcmartin.ca
 D: Linux/PARISC hacker
 D: AD1889 sound driver
 S: Ottawa, Canada
@@ -3780,14 +3779,13 @@ S: 21513 Conradia Ct
 S: Cupertino, CA 95014
 S: USA
 
-N: Thibaut Varene
-E: T-Bone@parisc-linux.org
-W: http://www.parisc-linux.org/~varenet/
-P: 1024D/B7D2F063 E67C 0D43 A75E 12A5 BB1C FA2F 1E32 C3DA B7D2 F063
+N: Thibaut Varène
+E: hacks+kernel@slashdirt.org
+W: http://hacks.slashdirt.org/
 D: PA-RISC port minion, PDC and GSCPS2 drivers, debuglocks and other bits
 D: Some ARM at91rm9200 bits, S1D13XXX FB driver, random patches here and there
 D: AD1889 sound driver
-S: Paris, France
+S: France
 
 N: Heikki Vatiainen
 E: hessu@cs.tut.fi


@@ -1,9 +1,9 @@
 .. _readme:
 
-Linux kernel release 4.x <http://kernel.org/>
+Linux kernel release 5.x <http://kernel.org/>
 =============================================
 
-These are the release notes for Linux version 4. Read them carefully,
+These are the release notes for Linux version 5. Read them carefully,
 as they tell you what this is all about, explain how to install the
 kernel, and what to do if something goes wrong.
@@ -63,7 +63,7 @@ Installing the kernel source
 directory where you have permissions (e.g. your home directory) and
 unpack it::
 
-xz -cd linux-4.X.tar.xz | tar xvf -
+xz -cd linux-5.x.tar.xz | tar xvf -
 
 Replace "X" with the version number of the latest kernel.
@@ -72,26 +72,26 @@ Installing the kernel source
 files. They should match the library, and not get messed up by
 whatever the kernel-du-jour happens to be.
 
-- You can also upgrade between 4.x releases by patching. Patches are
+- You can also upgrade between 5.x releases by patching. Patches are
 distributed in the xz format. To install by patching, get all the
 newer patch files, enter the top level directory of the kernel source
-(linux-4.X) and execute::
+(linux-5.x) and execute::
 
-xz -cd ../patch-4.x.xz | patch -p1
+xz -cd ../patch-5.x.xz | patch -p1
 
-Replace "x" for all versions bigger than the version "X" of your current
+Replace "x" for all versions bigger than the version "x" of your current
 source tree, **in_order**, and you should be ok. You may want to remove
 the backup files (some-file-name~ or some-file-name.orig), and make sure
 that there are no failed patches (some-file-name# or some-file-name.rej).
 If there are, either you or I have made a mistake.
 
-Unlike patches for the 4.x kernels, patches for the 4.x.y kernels
+Unlike patches for the 5.x kernels, patches for the 5.x.y kernels
 (also known as the -stable kernels) are not incremental but instead apply
-directly to the base 4.x kernel. For example, if your base kernel is 4.0
-and you want to apply the 4.0.3 patch, you must not first apply the 4.0.1
-and 4.0.2 patches. Similarly, if you are running kernel version 4.0.2 and
-want to jump to 4.0.3, you must first reverse the 4.0.2 patch (that is,
-patch -R) **before** applying the 4.0.3 patch. You can read more on this in
+directly to the base 5.x kernel. For example, if your base kernel is 5.0
+and you want to apply the 5.0.3 patch, you must not first apply the 5.0.1
+and 5.0.2 patches. Similarly, if you are running kernel version 5.0.2 and
+want to jump to 5.0.3, you must first reverse the 5.0.2 patch (that is,
+patch -R) **before** applying the 5.0.3 patch. You can read more on this in
 :ref:`Documentation/process/applying-patches.rst <applying_patches>`.
 
 Alternatively, the script patch-kernel can be used to automate this
@@ -114,7 +114,7 @@ Installing the kernel source
 Software requirements
 ---------------------
 
-Compiling and running the 4.x kernels requires up-to-date
+Compiling and running the 5.x kernels requires up-to-date
 versions of various software packages. Consult
 :ref:`Documentation/process/changes.rst <changes>` for the minimum version numbers
 required and how to get updates for these packages. Beware that using
@@ -132,12 +132,12 @@ Build directory for the kernel
 place for the output files (including .config).
 
 Example::
 
-kernel source code: /usr/src/linux-4.X
+kernel source code: /usr/src/linux-5.x
 build directory: /home/name/build/kernel
 
 To configure and build the kernel, use::
 
-cd /usr/src/linux-4.X
+cd /usr/src/linux-5.x
 make O=/home/name/build/kernel menuconfig
 make O=/home/name/build/kernel
 sudo make O=/home/name/build/kernel modules_install install


@@ -533,16 +533,12 @@ Bridge VLAN filtering
 function that the driver has to call for each VLAN the given port is a member
 of. A switchdev object is used to carry the VID and bridge flags.
 
-- port_fdb_prepare: bridge layer function invoked when the bridge prepares the
-  installation of a Forwarding Database entry. If the operation is not
-  supported, this function should return -EOPNOTSUPP to inform the bridge code
-  to fallback to a software implementation. No hardware setup must be done in
-  this function. See port_fdb_add for this and details.
-
 - port_fdb_add: bridge layer function invoked when the bridge wants to install a
   Forwarding Database entry, the switch hardware should be programmed with the
   specified address in the specified VLAN Id in the forwarding database
-  associated with this VLAN ID
+  associated with this VLAN ID. If the operation is not supported, this
+  function should return -EOPNOTSUPP to inform the bridge code to fallback to
+  a software implementation.
 
 Note: VLAN ID 0 corresponds to the port private database, which, in the context
 of DSA, would be the its port-based VLAN, used by the associated bridge device.
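
As a side note to the port_fdb_add semantics in this hunk, a driver-side hook could be sketched roughly as below. This is only an illustration: the callback shape follows the DSA switch-ops interface of this kernel generation, while the mychip_* names, the has_fdb_offload flag and the mychip_fdb_write() helper are invented for the example.

/* Hypothetical DSA driver hook; only the -EOPNOTSUPP fallback convention
 * comes from the documentation above, everything mychip_* is made up. */
static int mychip_port_fdb_add(struct dsa_switch *ds, int port,
			       const unsigned char *addr, u16 vid)
{
	struct mychip_priv *priv = ds->priv;

	if (!priv->has_fdb_offload)
		return -EOPNOTSUPP;	/* bridge falls back to software FDB */

	/* Program the hardware FDB with this address in this VLAN. */
	return mychip_fdb_write(priv, port, addr, vid);
}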


@@ -7,7 +7,7 @@ Intro
 =====
 
 The MSG_ZEROCOPY flag enables copy avoidance for socket send calls.
-The feature is currently implemented for TCP sockets.
+The feature is currently implemented for TCP and UDP sockets.
 
 Opportunity and Caveats
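
For readers unfamiliar with the flag, the user-space side is unchanged by this patch; a minimal sketch of the general interface (error handling and parsing of the MSG_ERRQUEUE completion notifications are omitted) looks like this:

#include <sys/socket.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60          /* value from asm-generic/socket.h */
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY 0x4000000  /* value from include/linux/socket.h */
#endif

static ssize_t send_zerocopy(int fd, const void *buf, size_t len)
{
	int one = 1;

	/* Opt in once per socket; with this change UDP sockets qualify too. */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)))
		return -1;

	/* The pages backing buf are pinned; the buffer must not be reused
	 * until the completion arrives on the socket error queue. */
	return send(fd, buf, len, MSG_ZEROCOPY);
}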


@@ -92,11 +92,11 @@ device.
 Switch ID
 ^^^^^^^^^
 
-The switchdev driver must implement the switchdev op switchdev_port_attr_get
-for SWITCHDEV_ATTR_ID_PORT_PARENT_ID for each port netdev, returning the same
-physical ID for each port of a switch. The ID must be unique between switches
-on the same system. The ID does not need to be unique between switches on
-different systems.
+The switchdev driver must implement the net_device operation
+ndo_get_port_parent_id for each port netdev, returning the same physical ID for
+each port of a switch. The ID must be unique between switches on the same
+system. The ID does not need to be unique between switches on different
+systems.
 
 The switch ID is used to locate ports on a switch and to know if aggregated
 ports belong to the same switch.
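
As an illustration of the new convention, an ndo_get_port_parent_id implementation in a hypothetical port driver could look as follows; the ndo signature and struct netdev_phys_item_id come from the kernel netdevice API, while the myswitch_* names and the hw_id field are assumptions made for this sketch.

/* Hypothetical port netdev: report the same parent ID for every port of
 * one switch, unique per switch on the system. */
static int myswitch_get_port_parent_id(struct net_device *dev,
				       struct netdev_phys_item_id *ppid)
{
	struct myswitch_port *port = netdev_priv(dev);

	ppid->id_len = sizeof(port->chip->hw_id);
	memcpy(ppid->id, &port->chip->hw_id, ppid->id_len);

	return 0;
}

static const struct net_device_ops myswitch_netdev_ops = {
	/* ... other callbacks ... */
	.ndo_get_port_parent_id = myswitch_get_port_parent_id,
};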


@@ -216,14 +216,14 @@ You can use the ``interdiff`` program (http://cyberelk.net/tim/patchutils/) to
 generate a patch representing the differences between two patches and then
 apply the result.
 
-This will let you move from something like 4.7.2 to 4.7.3 in a single
+This will let you move from something like 5.7.2 to 5.7.3 in a single
 step. The -z flag to interdiff will even let you feed it patches in gzip or
 bzip2 compressed form directly without the use of zcat or bzcat or manual
 decompression.
 
-Here's how you'd go from 4.7.2 to 4.7.3 in a single step::
+Here's how you'd go from 5.7.2 to 5.7.3 in a single step::
 
-interdiff -z ../patch-4.7.2.gz ../patch-4.7.3.gz | patch -p1
+interdiff -z ../patch-5.7.2.gz ../patch-5.7.3.gz | patch -p1
 
 Although interdiff may save you a step or two you are generally advised to
 do the additional steps since interdiff can get things wrong in some cases.
@@ -245,62 +245,67 @@ The patches are available at http://kernel.org/
 Most recent patches are linked from the front page, but they also have
 specific homes.
 
-The 4.x.y (-stable) and 4.x patches live at
+The 5.x.y (-stable) and 5.x patches live at
 
-https://www.kernel.org/pub/linux/kernel/v4.x/
+https://www.kernel.org/pub/linux/kernel/v5.x/
 
-The -rc patches live at
-https://www.kernel.org/pub/linux/kernel/v4.x/testing/
+The -rc patches are not stored on the webserver but are generated on
+demand from git tags such as
+https://git.kernel.org/torvalds/p/v5.1-rc1/v5.0
+
+The stable -rc patches live at
+https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/
 
-The 4.x kernels
+The 5.x kernels
 ===============
 
 These are the base stable releases released by Linus. The highest numbered
 release is the most recent.
 
 If regressions or other serious flaws are found, then a -stable fix patch
-will be released (see below) on top of this base. Once a new 4.x base
+will be released (see below) on top of this base. Once a new 5.x base
 kernel is released, a patch is made available that is a delta between the
-previous 4.x kernel and the new one.
+previous 5.x kernel and the new one.
 
-To apply a patch moving from 4.6 to 4.7, you'd do the following (note
-that such patches do **NOT** apply on top of 4.x.y kernels but on top of the
-base 4.x kernel -- if you need to move from 4.x.y to 4.x+1 you need to
-first revert the 4.x.y patch).
+To apply a patch moving from 5.6 to 5.7, you'd do the following (note
+that such patches do **NOT** apply on top of 5.x.y kernels but on top of the
+base 5.x kernel -- if you need to move from 5.x.y to 5.x+1 you need to
+first revert the 5.x.y patch).
 
 Here are some examples::
 
-# moving from 4.6 to 4.7
-$ cd ~/linux-4.6 # change to kernel source dir
-$ patch -p1 < ../patch-4.7 # apply the 4.7 patch
+# moving from 5.6 to 5.7
+$ cd ~/linux-5.6 # change to kernel source dir
+$ patch -p1 < ../patch-5.7 # apply the 5.7 patch
 $ cd ..
-$ mv linux-4.6 linux-4.7 # rename source dir
+$ mv linux-5.6 linux-5.7 # rename source dir
 
-# moving from 4.6.1 to 4.7
-$ cd ~/linux-4.6.1 # change to kernel source dir
-$ patch -p1 -R < ../patch-4.6.1 # revert the 4.6.1 patch
-# source dir is now 4.6
-$ patch -p1 < ../patch-4.7 # apply new 4.7 patch
+# moving from 5.6.1 to 5.7
+$ cd ~/linux-5.6.1 # change to kernel source dir
+$ patch -p1 -R < ../patch-5.6.1 # revert the 5.6.1 patch
+# source dir is now 5.6
+$ patch -p1 < ../patch-5.7 # apply new 5.7 patch
 $ cd ..
-$ mv linux-4.6.1 linux-4.7 # rename source dir
+$ mv linux-5.6.1 linux-5.7 # rename source dir
 
-The 4.x.y kernels
+The 5.x.y kernels
 =================
 
 Kernels with 3-digit versions are -stable kernels. They contain small(ish)
 critical fixes for security problems or significant regressions discovered
-in a given 4.x kernel.
+in a given 5.x kernel.
 
 This is the recommended branch for users who want the most recent stable
 kernel and are not interested in helping test development/experimental
 versions.
 
-If no 4.x.y kernel is available, then the highest numbered 4.x kernel is
+If no 5.x.y kernel is available, then the highest numbered 5.x kernel is
 the current stable kernel.
 
 .. note::
@@ -308,23 +313,23 @@ the current stable kernel.
 The -stable team usually do make incremental patches available as well
 as patches against the latest mainline release, but I only cover the
 non-incremental ones below. The incremental ones can be found at
-https://www.kernel.org/pub/linux/kernel/v4.x/incr/
+https://www.kernel.org/pub/linux/kernel/v5.x/incr/
 
-These patches are not incremental, meaning that for example the 4.7.3
-patch does not apply on top of the 4.7.2 kernel source, but rather on top
-of the base 4.7 kernel source.
+These patches are not incremental, meaning that for example the 5.7.3
+patch does not apply on top of the 5.7.2 kernel source, but rather on top
+of the base 5.7 kernel source.
 
-So, in order to apply the 4.7.3 patch to your existing 4.7.2 kernel
-source you have to first back out the 4.7.2 patch (so you are left with a
-base 4.7 kernel source) and then apply the new 4.7.3 patch.
+So, in order to apply the 5.7.3 patch to your existing 5.7.2 kernel
+source you have to first back out the 5.7.2 patch (so you are left with a
+base 5.7 kernel source) and then apply the new 5.7.3 patch.
 
 Here's a small example::
 
-$ cd ~/linux-4.7.2 # change to the kernel source dir
-$ patch -p1 -R < ../patch-4.7.2 # revert the 4.7.2 patch
-$ patch -p1 < ../patch-4.7.3 # apply the new 4.7.3 patch
+$ cd ~/linux-5.7.2 # change to the kernel source dir
+$ patch -p1 -R < ../patch-5.7.2 # revert the 5.7.2 patch
+$ patch -p1 < ../patch-5.7.3 # apply the new 5.7.3 patch
 $ cd ..
-$ mv linux-4.7.2 linux-4.7.3 # rename the kernel source dir
+$ mv linux-5.7.2 linux-5.7.3 # rename the kernel source dir
 
 The -rc kernels
 ===============
@@ -343,38 +348,38 @@ This is a good branch to run for people who want to help out testing
 development kernels but do not want to run some of the really experimental
 stuff (such people should see the sections about -next and -mm kernels below).
 
-The -rc patches are not incremental, they apply to a base 4.x kernel, just
-like the 4.x.y patches described above. The kernel version before the -rcN
+The -rc patches are not incremental, they apply to a base 5.x kernel, just
+like the 5.x.y patches described above. The kernel version before the -rcN
 suffix denotes the version of the kernel that this -rc kernel will eventually
 turn into.
 
-So, 4.8-rc5 means that this is the fifth release candidate for the 4.8
-kernel and the patch should be applied on top of the 4.7 kernel source.
+So, 5.8-rc5 means that this is the fifth release candidate for the 5.8
+kernel and the patch should be applied on top of the 5.7 kernel source.
 
 Here are 3 examples of how to apply these patches::
 
-# first an example of moving from 4.7 to 4.8-rc3
-$ cd ~/linux-4.7 # change to the 4.7 source dir
-$ patch -p1 < ../patch-4.8-rc3 # apply the 4.8-rc3 patch
+# first an example of moving from 5.7 to 5.8-rc3
+$ cd ~/linux-5.7 # change to the 5.7 source dir
+$ patch -p1 < ../patch-5.8-rc3 # apply the 5.8-rc3 patch
 $ cd ..
-$ mv linux-4.7 linux-4.8-rc3 # rename the source dir
+$ mv linux-5.7 linux-5.8-rc3 # rename the source dir
 
-# now let's move from 4.8-rc3 to 4.8-rc5
-$ cd ~/linux-4.8-rc3 # change to the 4.8-rc3 dir
-$ patch -p1 -R < ../patch-4.8-rc3 # revert the 4.8-rc3 patch
-$ patch -p1 < ../patch-4.8-rc5 # apply the new 4.8-rc5 patch
+# now let's move from 5.8-rc3 to 5.8-rc5
+$ cd ~/linux-5.8-rc3 # change to the 5.8-rc3 dir
+$ patch -p1 -R < ../patch-5.8-rc3 # revert the 5.8-rc3 patch
+$ patch -p1 < ../patch-5.8-rc5 # apply the new 5.8-rc5 patch
 $ cd ..
-$ mv linux-4.8-rc3 linux-4.8-rc5 # rename the source dir
+$ mv linux-5.8-rc3 linux-5.8-rc5 # rename the source dir
 
-# finally let's try and move from 4.7.3 to 4.8-rc5
-$ cd ~/linux-4.7.3 # change to the kernel source dir
-$ patch -p1 -R < ../patch-4.7.3 # revert the 4.7.3 patch
-$ patch -p1 < ../patch-4.8-rc5 # apply new 4.8-rc5 patch
+# finally let's try and move from 5.7.3 to 5.8-rc5
+$ cd ~/linux-5.7.3 # change to the kernel source dir
+$ patch -p1 -R < ../patch-5.7.3 # revert the 5.7.3 patch
+$ patch -p1 < ../patch-5.8-rc5 # apply new 5.8-rc5 patch
 $ cd ..
-$ mv linux-4.7.3 linux-4.8-rc5 # rename the kernel source dir
+$ mv linux-5.7.3 linux-5.8-rc5 # rename the kernel source dir
 
 The -mm patches and the linux-next tree


@@ -4,7 +4,7 @@
 .. _it_readme:
 
-Rilascio del kernel Linux 4.x <http://kernel.org/>
+Rilascio del kernel Linux 5.x <http://kernel.org/>
 ===================================================
 
 .. warning::


@@ -409,8 +409,7 @@ F: drivers/platform/x86/wmi.c
 F: include/uapi/linux/wmi.h
 
 AD1889 ALSA SOUND DRIVER
-M: Thibaut Varene <T-Bone@parisc-linux.org>
-W: http://wiki.parisc-linux.org/AD1889
+W: https://parisc.wiki.kernel.org/index.php/AD1889
 L: linux-parisc@vger.kernel.org
 S: Maintained
 F: sound/pci/ad1889.*
@@ -2865,7 +2864,7 @@ R: Martin KaFai Lau <kafai@fb.com>
 R: Song Liu <songliubraving@fb.com>
 R: Yonghong Song <yhs@fb.com>
 L: netdev@vger.kernel.org
-L: linux-kernel@vger.kernel.org
+L: bpf@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
 Q: https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
@@ -2895,6 +2894,7 @@ N: bpf
 BPF JIT for ARM
 M: Shubham Bansal <illusionist.neo@gmail.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/arm/net/
@@ -2903,18 +2903,21 @@ M: Daniel Borkmann <daniel@iogearbox.net>
 M: Alexei Starovoitov <ast@kernel.org>
 M: Zi Shen Lim <zlim.lnx@gmail.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Supported
 F: arch/arm64/net/
 
 BPF JIT for MIPS (32-BIT AND 64-BIT)
 M: Paul Burton <paul.burton@mips.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/mips/net/
 
 BPF JIT for NFP NICs
 M: Jakub Kicinski <jakub.kicinski@netronome.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Supported
 F: drivers/net/ethernet/netronome/nfp/bpf/
@@ -2922,6 +2925,7 @@ BPF JIT for POWERPC (32-BIT AND 64-BIT)
 M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M: Sandipan Das <sandipan@linux.ibm.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/powerpc/net/
@@ -2929,6 +2933,7 @@ BPF JIT for S390
 M: Martin Schwidefsky <schwidefsky@de.ibm.com>
 M: Heiko Carstens <heiko.carstens@de.ibm.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/s390/net/
 X: arch/s390/net/pnet.c
@@ -2936,12 +2941,14 @@ X: arch/s390/net/pnet.c
 BPF JIT for SPARC (32-BIT AND 64-BIT)
 M: David S. Miller <davem@davemloft.net>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/sparc/net/
 
 BPF JIT for X86 32-BIT
 M: Wang YanQing <udknight@gmail.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/x86/net/bpf_jit_comp32.c
@@ -2949,6 +2956,7 @@ BPF JIT for X86 64-BIT
 M: Alexei Starovoitov <ast@kernel.org>
 M: Daniel Borkmann <daniel@iogearbox.net>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Supported
 F: arch/x86/net/
 X: arch/x86/net/bpf_jit_comp32.c
@@ -3403,9 +3411,8 @@ F: Documentation/media/v4l-drivers/cafe_ccic*
 F: drivers/media/platform/marvell-ccic/
 
 CAIF NETWORK LAYER
-M: Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
 L: netdev@vger.kernel.org
-S: Supported
+S: Orphan
 F: Documentation/networking/caif/
 F: drivers/net/caif/
 F: include/uapi/linux/caif/
@@ -8524,6 +8531,7 @@ L7 BPF FRAMEWORK
 M: John Fastabend <john.fastabend@gmail.com>
 M: Daniel Borkmann <daniel@iogearbox.net>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: include/linux/skmsg.h
 F: net/core/skmsg.c
@@ -11525,7 +11533,7 @@ F: Documentation/blockdev/paride.txt
 F: drivers/block/paride/
 
 PARISC ARCHITECTURE
-M: "James E.J. Bottomley" <jejb@parisc-linux.org>
+M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
 M: Helge Deller <deller@gmx.de>
 L: linux-parisc@vger.kernel.org
 W: http://www.parisc-linux.org/
@@ -16751,6 +16759,7 @@ M: Jesper Dangaard Brouer <hawk@kernel.org>
 M: John Fastabend <john.fastabend@gmail.com>
 L: netdev@vger.kernel.org
 L: xdp-newbies@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Supported
 F: net/core/xdp.c
 F: include/net/xdp.h
@@ -16764,6 +16773,7 @@ XDP SOCKETS (AF_XDP)
 M: Björn Töpel <bjorn.topel@intel.com>
 M: Magnus Karlsson <magnus.karlsson@intel.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: kernel/bpf/xskmap.c
 F: net/xdp/


@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 0
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION =
 NAME = Shy Crocodile
 
 # *DOCUMENTATION*


@ -191,7 +191,6 @@ config NR_CPUS
config ARC_SMP_HALT_ON_RESET config ARC_SMP_HALT_ON_RESET
bool "Enable Halt-on-reset boot mode" bool "Enable Halt-on-reset boot mode"
default y if ARC_UBOOT_SUPPORT
help help
In SMP configuration cores can be configured as Halt-on-reset In SMP configuration cores can be configured as Halt-on-reset
or they could all start at same time. For Halt-on-reset, non or they could all start at same time. For Halt-on-reset, non
@ -407,6 +406,14 @@ config ARC_HAS_ACCL_REGS
(also referred to as r58:r59). These can also be used by gcc as GPR so (also referred to as r58:r59). These can also be used by gcc as GPR so
kernel needs to save/restore per process kernel needs to save/restore per process
config ARC_IRQ_NO_AUTOSAVE
bool "Disable hardware autosave regfile on interrupts"
default n
help
On HS cores, taken interrupt auto saves the regfile on stack.
This is programmable and can be optionally disabled in which case
software INTERRUPT_PROLOGUE/EPILGUE do the needed work
endif # ISA_ARCV2 endif # ISA_ARCV2
endmenu # "ARC CPU Configuration" endmenu # "ARC CPU Configuration"
@ -515,17 +522,6 @@ config ARC_DBG_TLB_PARANOIA
endif endif
config ARC_UBOOT_SUPPORT
bool "Support uboot arg Handling"
help
ARC Linux by default checks for uboot provided args as pointers to
external cmdline or DTB. This however breaks in absence of uboot,
when booting from Metaware debugger directly, as the registers are
not zeroed out on reset by mdb and/or ARCv2 based cores. The bogus
registers look like uboot args to kernel which then chokes.
So only enable the uboot arg checking/processing if users are sure
of uboot being in play.
config ARC_BUILTIN_DTB_NAME config ARC_BUILTIN_DTB_NAME
string "Built in DTB" string "Built in DTB"
help help


@ -31,7 +31,6 @@ CONFIG_ARC_CACHE_LINE_SHIFT=5
# CONFIG_ARC_HAS_LLSC is not set # CONFIG_ARC_HAS_LLSC is not set
CONFIG_ARC_KVADDR_SIZE=402 CONFIG_ARC_KVADDR_SIZE=402
CONFIG_ARC_EMUL_UNALIGNED=y CONFIG_ARC_EMUL_UNALIGNED=y
CONFIG_ARC_UBOOT_SUPPORT=y
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
CONFIG_NET=y CONFIG_NET=y
CONFIG_UNIX=y CONFIG_UNIX=y


@ -13,7 +13,6 @@ CONFIG_PARTITION_ADVANCED=y
CONFIG_ARC_PLAT_AXS10X=y CONFIG_ARC_PLAT_AXS10X=y
CONFIG_AXS103=y CONFIG_AXS103=y
CONFIG_ISA_ARCV2=y CONFIG_ISA_ARCV2=y
CONFIG_ARC_UBOOT_SUPPORT=y
CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38" CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38"
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
CONFIG_NET=y CONFIG_NET=y


@ -15,8 +15,6 @@ CONFIG_AXS103=y
CONFIG_ISA_ARCV2=y CONFIG_ISA_ARCV2=y
CONFIG_SMP=y CONFIG_SMP=y
# CONFIG_ARC_TIMERS_64BIT is not set # CONFIG_ARC_TIMERS_64BIT is not set
# CONFIG_ARC_SMP_HALT_ON_RESET is not set
CONFIG_ARC_UBOOT_SUPPORT=y
CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp" CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp"
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
CONFIG_NET=y CONFIG_NET=y


@ -151,6 +151,14 @@ struct bcr_isa_arcv2 {
#endif #endif
}; };
struct bcr_uarch_build_arcv2 {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad:8, prod:8, maj:8, min:8;
#else
unsigned int min:8, maj:8, prod:8, pad:8;
#endif
};
struct bcr_mpy { struct bcr_mpy {
#ifdef CONFIG_CPU_BIG_ENDIAN #ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad:8, x1616:8, dsp:4, cycles:2, type:2, ver:8; unsigned int pad:8, x1616:8, dsp:4, cycles:2, type:2, ver:8;


@ -52,6 +52,17 @@
#define cache_line_size() SMP_CACHE_BYTES #define cache_line_size() SMP_CACHE_BYTES
#define ARCH_DMA_MINALIGN SMP_CACHE_BYTES #define ARCH_DMA_MINALIGN SMP_CACHE_BYTES
/*
* Make sure slab-allocated buffers are 64-bit aligned when atomic64_t uses
* ARCv2 64-bit atomics (LLOCKD/SCONDD). This guarantess runtime 64-bit
* alignment for any atomic64_t embedded in buffer.
* Default ARCH_SLAB_MINALIGN is __alignof__(long long) which has a relaxed
* value of 4 (and not 8) in ARC ABI.
*/
#if defined(CONFIG_ARC_HAS_LL64) && defined(CONFIG_ARC_HAS_LLSC)
#define ARCH_SLAB_MINALIGN 8
#endif
extern void arc_cache_init(void); extern void arc_cache_init(void);
extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len); extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
extern void read_decode_cache_bcr(void); extern void read_decode_cache_bcr(void);


@ -17,6 +17,33 @@
; ;
; Now manually save: r12, sp, fp, gp, r25 ; Now manually save: r12, sp, fp, gp, r25
#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
.ifnc \called_from, exception
st.as r9, [sp, -10] ; save r9 in it's final stack slot
sub sp, sp, 12 ; skip JLI, LDI, EI
PUSH lp_count
PUSHAX lp_start
PUSHAX lp_end
PUSH blink
PUSH r11
PUSH r10
sub sp, sp, 4 ; skip r9
PUSH r8
PUSH r7
PUSH r6
PUSH r5
PUSH r4
PUSH r3
PUSH r2
PUSH r1
PUSH r0
.endif
#endif
#ifdef CONFIG_ARC_HAS_ACCL_REGS #ifdef CONFIG_ARC_HAS_ACCL_REGS
PUSH r59 PUSH r59
PUSH r58 PUSH r58
@ -86,6 +113,33 @@
POP r59 POP r59
#endif #endif
#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
.ifnc \called_from, exception
POP r0
POP r1
POP r2
POP r3
POP r4
POP r5
POP r6
POP r7
POP r8
POP r9
POP r10
POP r11
POP blink
POPAX lp_end
POPAX lp_start
POP r9
mov lp_count, r9
add sp, sp, 12 ; skip JLI, LDI, EI
ld.as r9, [sp, -10] ; reload r9 which got clobbered
.endif
#endif
.endm .endm
/*------------------------------------------------------------------------*/ /*------------------------------------------------------------------------*/


@ -207,7 +207,7 @@ raw_copy_from_user(void *to, const void __user *from, unsigned long n)
*/ */
"=&r" (tmp), "+r" (to), "+r" (from) "=&r" (tmp), "+r" (to), "+r" (from)
: :
: "lp_count", "lp_start", "lp_end", "memory"); : "lp_count", "memory");
return n; return n;
} }
@ -433,7 +433,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
*/ */
"=&r" (tmp), "+r" (to), "+r" (from) "=&r" (tmp), "+r" (to), "+r" (from)
: :
: "lp_count", "lp_start", "lp_end", "memory"); : "lp_count", "memory");
return n; return n;
} }
@ -653,7 +653,7 @@ static inline unsigned long __arc_clear_user(void __user *to, unsigned long n)
" .previous \n" " .previous \n"
: "+r"(d_char), "+r"(res) : "+r"(d_char), "+r"(res)
: "i"(0) : "i"(0)
: "lp_count", "lp_start", "lp_end", "memory"); : "lp_count", "memory");
return res; return res;
} }
@ -686,7 +686,7 @@ __arc_strncpy_from_user(char *dst, const char __user *src, long count)
" .previous \n" " .previous \n"
: "+r"(res), "+r"(dst), "+r"(src), "=r"(val) : "+r"(res), "+r"(dst), "+r"(src), "=r"(val)
: "g"(-EFAULT), "r"(count) : "g"(-EFAULT), "r"(count)
: "lp_count", "lp_start", "lp_end", "memory"); : "lp_count", "memory");
return res; return res;
} }


@ -209,7 +209,9 @@ restore_regs:
;####### Return from Intr ####### ;####### Return from Intr #######
debug_marker_l1: debug_marker_l1:
bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot ; bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
btst r0, STATUS_DE_BIT ; Z flag set if bit clear
bnz .Lintr_ret_to_delay_slot ; branch if STATUS_DE_BIT set
.Lisr_ret_fast_path: .Lisr_ret_fast_path:
; Handle special case #1: (Entry via Exception, Return via IRQ) ; Handle special case #1: (Entry via Exception, Return via IRQ)


@ -17,6 +17,7 @@
#include <asm/entry.h> #include <asm/entry.h>
#include <asm/arcregs.h> #include <asm/arcregs.h>
#include <asm/cache.h> #include <asm/cache.h>
#include <asm/irqflags.h>
.macro CPU_EARLY_SETUP .macro CPU_EARLY_SETUP
@ -47,6 +48,15 @@
sr r5, [ARC_REG_DC_CTRL] sr r5, [ARC_REG_DC_CTRL]
1: 1:
#ifdef CONFIG_ISA_ARCV2
; Unaligned access is disabled at reset, so re-enable early as
; gcc 7.3.1 (ARC GNU 2018.03) onwards generates unaligned access
; by default
lr r5, [status32]
bset r5, r5, STATUS_AD_BIT
kflag r5
#endif
.endm .endm
.section .init.text, "ax",@progbits .section .init.text, "ax",@progbits
@ -90,15 +100,13 @@ ENTRY(stext)
st.ab 0, [r5, 4] st.ab 0, [r5, 4]
1: 1:
#ifdef CONFIG_ARC_UBOOT_SUPPORT
; Uboot - kernel ABI ; Uboot - kernel ABI
; r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2 ; r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
; r1 = magic number (board identity, unused as of now ; r1 = magic number (always zero as of now)
; r2 = pointer to uboot provided cmdline or external DTB in mem ; r2 = pointer to uboot provided cmdline or external DTB in mem
; These are handled later in setup_arch() ; These are handled later in handle_uboot_args()
st r0, [@uboot_tag] st r0, [@uboot_tag]
st r2, [@uboot_arg] st r2, [@uboot_arg]
#endif
; setup "current" tsk and optionally cache it in dedicated r25 ; setup "current" tsk and optionally cache it in dedicated r25
mov r9, @init_task mov r9, @init_task


@ -49,11 +49,13 @@ void arc_init_IRQ(void)
*(unsigned int *)&ictrl = 0; *(unsigned int *)&ictrl = 0;
#ifndef CONFIG_ARC_IRQ_NO_AUTOSAVE
ictrl.save_nr_gpr_pairs = 6; /* r0 to r11 (r12 saved manually) */ ictrl.save_nr_gpr_pairs = 6; /* r0 to r11 (r12 saved manually) */
ictrl.save_blink = 1; ictrl.save_blink = 1;
ictrl.save_lp_regs = 1; /* LP_COUNT, LP_START, LP_END */ ictrl.save_lp_regs = 1; /* LP_COUNT, LP_START, LP_END */
ictrl.save_u_to_u = 0; /* user ctxt saved on kernel stack */ ictrl.save_u_to_u = 0; /* user ctxt saved on kernel stack */
ictrl.save_idx_regs = 1; /* JLI, LDI, EI */ ictrl.save_idx_regs = 1; /* JLI, LDI, EI */
#endif
WRITE_AUX(AUX_IRQ_CTRL, ictrl); WRITE_AUX(AUX_IRQ_CTRL, ictrl);


@ -199,20 +199,36 @@ static void read_arc_build_cfg_regs(void)
cpu->bpu.ret_stk = 4 << bpu.rse; cpu->bpu.ret_stk = 4 << bpu.rse;
if (cpu->core.family >= 0x54) { if (cpu->core.family >= 0x54) {
struct bcr_uarch_build_arcv2 uarch;
/*
* The first 0x54 core (uarch maj:min 0:1 or 0:2) was
* dual issue only (HS4x). But next uarch rev (1:0)
* allows it be configured for single issue (HS3x)
* Ensure we fiddle with dual issue only on HS4x
*/
READ_BCR(ARC_REG_MICRO_ARCH_BCR, uarch);
if (uarch.prod == 4) {
unsigned int exec_ctrl; unsigned int exec_ctrl;
/* dual issue hardware always present */
cpu->extn.dual = 1;
READ_BCR(AUX_EXEC_CTRL, exec_ctrl); READ_BCR(AUX_EXEC_CTRL, exec_ctrl);
/* dual issue hardware enabled ? */
cpu->extn.dual_enb = !(exec_ctrl & 1); cpu->extn.dual_enb = !(exec_ctrl & 1);
/* dual issue always present for this core */ }
cpu->extn.dual = 1;
} }
} }
READ_BCR(ARC_REG_AP_BCR, ap); READ_BCR(ARC_REG_AP_BCR, ap);
if (ap.ver) { if (ap.ver) {
cpu->extn.ap_num = 2 << ap.num; cpu->extn.ap_num = 2 << ap.num;
cpu->extn.ap_full = !!ap.min; cpu->extn.ap_full = !ap.min;
} }
READ_BCR(ARC_REG_SMART_BCR, bcr); READ_BCR(ARC_REG_SMART_BCR, bcr);
@ -462,43 +478,78 @@ void setup_processor(void)
arc_chk_core_config(); arc_chk_core_config();
} }
static inline int is_kernel(unsigned long addr) static inline bool uboot_arg_invalid(unsigned long addr)
{ {
if (addr >= (unsigned long)_stext && addr <= (unsigned long)_end) /*
return 1; * Check that it is a untranslated address (although MMU is not enabled
return 0; * yet, it being a high address ensures this is not by fluke)
*/
if (addr < PAGE_OFFSET)
return true;
/* Check that address doesn't clobber resident kernel image */
return addr >= (unsigned long)_stext && addr <= (unsigned long)_end;
}
#define IGNORE_ARGS "Ignore U-boot args: "
/* uboot_tag values for U-boot - kernel ABI revision 0; see head.S */
#define UBOOT_TAG_NONE 0
#define UBOOT_TAG_CMDLINE 1
#define UBOOT_TAG_DTB 2
void __init handle_uboot_args(void)
{
bool use_embedded_dtb = true;
bool append_cmdline = false;
/* check that we know this tag */
if (uboot_tag != UBOOT_TAG_NONE &&
uboot_tag != UBOOT_TAG_CMDLINE &&
uboot_tag != UBOOT_TAG_DTB) {
pr_warn(IGNORE_ARGS "invalid uboot tag: '%08x'\n", uboot_tag);
goto ignore_uboot_args;
}
if (uboot_tag != UBOOT_TAG_NONE &&
uboot_arg_invalid((unsigned long)uboot_arg)) {
pr_warn(IGNORE_ARGS "invalid uboot arg: '%px'\n", uboot_arg);
goto ignore_uboot_args;
}
/* see if U-boot passed an external Device Tree blob */
if (uboot_tag == UBOOT_TAG_DTB) {
machine_desc = setup_machine_fdt((void *)uboot_arg);
/* external Device Tree blob is invalid - use embedded one */
use_embedded_dtb = !machine_desc;
}
if (uboot_tag == UBOOT_TAG_CMDLINE)
append_cmdline = true;
ignore_uboot_args:
if (use_embedded_dtb) {
machine_desc = setup_machine_fdt(__dtb_start);
if (!machine_desc)
panic("Embedded DT invalid\n");
}
/*
* NOTE: @boot_command_line is populated by setup_machine_fdt() so this
* append processing can only happen after.
*/
if (append_cmdline) {
/* Ensure a whitespace between the 2 cmdlines */
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, uboot_arg, COMMAND_LINE_SIZE);
}
} }
void __init setup_arch(char **cmdline_p) void __init setup_arch(char **cmdline_p)
{ {
#ifdef CONFIG_ARC_UBOOT_SUPPORT handle_uboot_args();
/* make sure that uboot passed pointer to cmdline/dtb is valid */
if (uboot_tag && is_kernel((unsigned long)uboot_arg))
panic("Invalid uboot arg\n");
/* See if u-boot passed an external Device Tree blob */
machine_desc = setup_machine_fdt(uboot_arg); /* uboot_tag == 2 */
if (!machine_desc)
#endif
{
/* No, so try the embedded one */
machine_desc = setup_machine_fdt(__dtb_start);
if (!machine_desc)
panic("Embedded DT invalid\n");
/*
* If we are here, it is established that @uboot_arg didn't
* point to DT blob. Instead if u-boot says it is cmdline,
* append to embedded DT cmdline.
* setup_machine_fdt() would have populated @boot_command_line
*/
if (uboot_tag == 1) {
/* Ensure a whitespace between the 2 cmdlines */
strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, uboot_arg,
COMMAND_LINE_SIZE);
}
}
/* Save unparsed command line copy for /proc/cmdline */ /* Save unparsed command line copy for /proc/cmdline */
*cmdline_p = boot_command_line; *cmdline_p = boot_command_line;


@ -25,15 +25,11 @@
#endif #endif
#ifdef CONFIG_ARC_HAS_LL64 #ifdef CONFIG_ARC_HAS_LL64
# define PREFETCH_READ(RX) prefetch [RX, 56]
# define PREFETCH_WRITE(RX) prefetchw [RX, 64]
# define LOADX(DST,RX) ldd.ab DST, [RX, 8] # define LOADX(DST,RX) ldd.ab DST, [RX, 8]
# define STOREX(SRC,RX) std.ab SRC, [RX, 8] # define STOREX(SRC,RX) std.ab SRC, [RX, 8]
# define ZOLSHFT 5 # define ZOLSHFT 5
# define ZOLAND 0x1F # define ZOLAND 0x1F
#else #else
# define PREFETCH_READ(RX) prefetch [RX, 28]
# define PREFETCH_WRITE(RX) prefetchw [RX, 32]
# define LOADX(DST,RX) ld.ab DST, [RX, 4] # define LOADX(DST,RX) ld.ab DST, [RX, 4]
# define STOREX(SRC,RX) st.ab SRC, [RX, 4] # define STOREX(SRC,RX) st.ab SRC, [RX, 4]
# define ZOLSHFT 4 # define ZOLSHFT 4
@ -41,8 +37,6 @@
#endif #endif
ENTRY_CFI(memcpy) ENTRY_CFI(memcpy)
prefetch [r1] ; Prefetch the read location
prefetchw [r0] ; Prefetch the write location
mov.f 0, r2 mov.f 0, r2
;;; if size is zero ;;; if size is zero
jz.d [blink] jz.d [blink]
@ -72,8 +66,6 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy32_64bytes lpnz @.Lcopy32_64bytes
;; LOOP START ;; LOOP START
LOADX (r6, r1) LOADX (r6, r1)
PREFETCH_READ (r1)
PREFETCH_WRITE (r3)
LOADX (r8, r1) LOADX (r8, r1)
LOADX (r10, r1) LOADX (r10, r1)
LOADX (r4, r1) LOADX (r4, r1)
@ -117,9 +109,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_1 lpnz @.Lcopy8bytes_1
;; LOOP START ;; LOOP START
ld.ab r6, [r1, 4] ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4] ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 24) SHIFT_1 (r7, r6, 24)
or r7, r7, r5 or r7, r7, r5
@ -162,9 +152,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_2 lpnz @.Lcopy8bytes_2
;; LOOP START ;; LOOP START
ld.ab r6, [r1, 4] ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4] ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 16) SHIFT_1 (r7, r6, 16)
or r7, r7, r5 or r7, r7, r5
@ -204,9 +192,7 @@ ENTRY_CFI(memcpy)
lpnz @.Lcopy8bytes_3 lpnz @.Lcopy8bytes_3
;; LOOP START ;; LOOP START
ld.ab r6, [r1, 4] ld.ab r6, [r1, 4]
prefetch [r1, 28] ;Prefetch the next read location
ld.ab r8, [r1,4] ld.ab r8, [r1,4]
prefetchw [r3, 32] ;Prefetch the next write location
SHIFT_1 (r7, r6, 8) SHIFT_1 (r7, r6, 8)
or r7, r7, r5 or r7, r7, r5


@ -9,6 +9,7 @@ menuconfig ARC_SOC_HSDK
bool "ARC HS Development Kit SOC" bool "ARC HS Development Kit SOC"
depends on ISA_ARCV2 depends on ISA_ARCV2
select ARC_HAS_ACCL_REGS select ARC_HAS_ACCL_REGS
select ARC_IRQ_NO_AUTOSAVE
select CLK_HSDK select CLK_HSDK
select RESET_HSDK select RESET_HSDK
select HAVE_PCI select HAVE_PCI


@ -1400,6 +1400,7 @@ config NR_CPUS
config HOTPLUG_CPU config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs" bool "Support for hot-pluggable CPUs"
depends on SMP depends on SMP
select GENERIC_IRQ_MIGRATION
help help
Say Y here to experiment with turning CPUs off and on. CPUs Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu. can be controlled through /sys/devices/system/cpu.


@ -729,7 +729,7 @@
&cpsw_emac0 { &cpsw_emac0 {
phy-handle = <&ethphy0>; phy-handle = <&ethphy0>;
phy-mode = "rgmii-txid"; phy-mode = "rgmii-id";
}; };
&tscadc { &tscadc {


@ -651,13 +651,13 @@
&cpsw_emac0 { &cpsw_emac0 {
phy-handle = <&ethphy0>; phy-handle = <&ethphy0>;
phy-mode = "rgmii-txid"; phy-mode = "rgmii-id";
dual_emac_res_vlan = <1>; dual_emac_res_vlan = <1>;
}; };
&cpsw_emac1 { &cpsw_emac1 {
phy-handle = <&ethphy1>; phy-handle = <&ethphy1>;
phy-mode = "rgmii-txid"; phy-mode = "rgmii-id";
dual_emac_res_vlan = <2>; dual_emac_res_vlan = <2>;
}; };


@ -144,11 +144,13 @@
status = "okay"; status = "okay";
}; };
nand@d0000 { nand-controller@d0000 {
status = "okay"; status = "okay";
nand@0 {
reg = <0>;
label = "pxa3xx_nand-0"; label = "pxa3xx_nand-0";
num-cs = <1>; nand-rb = <0>;
marvell,nand-keep-config;
nand-on-flash-bbt; nand-on-flash-bbt;
partitions { partitions {
@ -167,7 +169,7 @@
partition@1000000 { partition@1000000 {
label = "Filesystem"; label = "Filesystem";
reg = <0x1000000 0x3f000000>; reg = <0x1000000 0x3f000000>;
};
}; };
}; };
}; };


@ -160,14 +160,17 @@
status = "okay"; status = "okay";
}; };
nand@d0000 { nand-controller@d0000 {
status = "okay"; status = "okay";
nand@0 {
reg = <0>;
label = "pxa3xx_nand-0"; label = "pxa3xx_nand-0";
num-cs = <1>; nand-rb = <0>;
marvell,nand-keep-config;
nand-on-flash-bbt; nand-on-flash-bbt;
}; };
}; };
};
bm-bppi { bm-bppi {
status = "okay"; status = "okay";


@ -81,11 +81,13 @@
}; };
nand@d0000 { nand-controller@d0000 {
status = "okay"; status = "okay";
nand@0 {
reg = <0>;
label = "pxa3xx_nand-0"; label = "pxa3xx_nand-0";
num-cs = <1>; nand-rb = <0>;
marvell,nand-keep-config;
nand-on-flash-bbt; nand-on-flash-bbt;
partitions { partitions {
@ -129,6 +131,7 @@
}; };
}; };
}; };
};
gpio-keys { gpio-keys {
compatible = "gpio-keys"; compatible = "gpio-keys";


@ -443,7 +443,7 @@
}; };
display-controller@6a000000 { display-controller@6a000000 {
status = "disabled"; status = "okay";
port@0 { port@0 {
reg = <0>; reg = <0>;


@ -13,10 +13,25 @@
stdout-path = "serial0:115200n8"; stdout-path = "serial0:115200n8";
}; };
memory@80000000 { /*
* Note that recent version of the device tree compiler (starting with
* version 1.4.2) warn about this node containing a reg property, but
* missing a unit-address. However, the bootloader on these Chromebook
* devices relies on the full name of this node to be exactly /memory.
* Adding the unit-address causes the bootloader to create a /memory
* node and write the memory bank configuration to that node, which in
* turn leads the kernel to believe that the device has 2 GiB of
* memory instead of the amount detected by the bootloader.
*
* The name of this node is effectively ABI and must not be changed.
*/
memory {
device_type = "memory";
reg = <0x0 0x80000000 0x0 0x80000000>; reg = <0x0 0x80000000 0x0 0x80000000>;
}; };
/delete-node/ memory@80000000;
host1x@50000000 { host1x@50000000 {
hdmi@54280000 { hdmi@54280000 {
status = "okay"; status = "okay";


@ -212,10 +212,11 @@ K256:
.global sha256_block_data_order .global sha256_block_data_order
.type sha256_block_data_order,%function .type sha256_block_data_order,%function
sha256_block_data_order: sha256_block_data_order:
.Lsha256_block_data_order:
#if __ARM_ARCH__<7 #if __ARM_ARCH__<7
sub r3,pc,#8 @ sha256_block_data_order sub r3,pc,#8 @ sha256_block_data_order
#else #else
adr r3,sha256_block_data_order adr r3,.Lsha256_block_data_order
#endif #endif
#if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__) #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
ldr r12,.LOPENSSL_armcap ldr r12,.LOPENSSL_armcap


@ -93,10 +93,11 @@ K256:
.global sha256_block_data_order .global sha256_block_data_order
.type sha256_block_data_order,%function .type sha256_block_data_order,%function
sha256_block_data_order: sha256_block_data_order:
.Lsha256_block_data_order:
#if __ARM_ARCH__<7 #if __ARM_ARCH__<7
sub r3,pc,#8 @ sha256_block_data_order sub r3,pc,#8 @ sha256_block_data_order
#else #else
adr r3,sha256_block_data_order adr r3,.Lsha256_block_data_order
#endif #endif
#if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__) #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
ldr r12,.LOPENSSL_armcap ldr r12,.LOPENSSL_armcap


@ -274,10 +274,11 @@ WORD64(0x5fcb6fab,0x3ad6faec, 0x6c44198c,0x4a475817)
.global sha512_block_data_order .global sha512_block_data_order
.type sha512_block_data_order,%function .type sha512_block_data_order,%function
sha512_block_data_order: sha512_block_data_order:
.Lsha512_block_data_order:
#if __ARM_ARCH__<7 #if __ARM_ARCH__<7
sub r3,pc,#8 @ sha512_block_data_order sub r3,pc,#8 @ sha512_block_data_order
#else #else
adr r3,sha512_block_data_order adr r3,.Lsha512_block_data_order
#endif #endif
#if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__) #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
ldr r12,.LOPENSSL_armcap ldr r12,.LOPENSSL_armcap


@ -141,10 +141,11 @@ WORD64(0x5fcb6fab,0x3ad6faec, 0x6c44198c,0x4a475817)
.global sha512_block_data_order .global sha512_block_data_order
.type sha512_block_data_order,%function .type sha512_block_data_order,%function
sha512_block_data_order: sha512_block_data_order:
.Lsha512_block_data_order:
#if __ARM_ARCH__<7 #if __ARM_ARCH__<7
sub r3,pc,#8 @ sha512_block_data_order sub r3,pc,#8 @ sha512_block_data_order
#else #else
adr r3,sha512_block_data_order adr r3,.Lsha512_block_data_order
#endif #endif
#if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__) #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
ldr r12,.LOPENSSL_armcap ldr r12,.LOPENSSL_armcap


@ -25,7 +25,6 @@
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
struct irqaction; struct irqaction;
struct pt_regs; struct pt_regs;
extern void migrate_irqs(void);
extern void asm_do_IRQ(unsigned int, struct pt_regs *); extern void asm_do_IRQ(unsigned int, struct pt_regs *);
void handle_IRQ(unsigned int, struct pt_regs *); void handle_IRQ(unsigned int, struct pt_regs *);


@ -31,7 +31,6 @@
#include <linux/smp.h> #include <linux/smp.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/ratelimit.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/list.h> #include <linux/list.h>
#include <linux/kallsyms.h> #include <linux/kallsyms.h>
@ -109,64 +108,3 @@ int __init arch_probe_nr_irqs(void)
return nr_irqs; return nr_irqs;
} }
#endif #endif
#ifdef CONFIG_HOTPLUG_CPU
static bool migrate_one_irq(struct irq_desc *desc)
{
struct irq_data *d = irq_desc_get_irq_data(desc);
const struct cpumask *affinity = irq_data_get_affinity_mask(d);
struct irq_chip *c;
bool ret = false;
/*
* If this is a per-CPU interrupt, or the affinity does not
* include this CPU, then we have nothing to do.
*/
if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
return false;
if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
affinity = cpu_online_mask;
ret = true;
}
c = irq_data_get_irq_chip(d);
if (!c->irq_set_affinity)
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret)
cpumask_copy(irq_data_get_affinity_mask(d), affinity);
return ret;
}
/*
* The current CPU has been marked offline. Migrate IRQs off this CPU.
* If the affinity settings do not allow other CPUs, force them onto any
* available CPU.
*
* Note: we must iterate over all IRQs, whether they have an attached
* action structure or not, as we need to get chained interrupts too.
*/
void migrate_irqs(void)
{
unsigned int i;
struct irq_desc *desc;
unsigned long flags;
local_irq_save(flags);
for_each_irq_desc(i, desc) {
bool affinity_broken;
raw_spin_lock(&desc->lock);
affinity_broken = migrate_one_irq(desc);
raw_spin_unlock(&desc->lock);
if (affinity_broken)
pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n",
i, smp_processor_id());
}
local_irq_restore(flags);
}
#endif /* CONFIG_HOTPLUG_CPU */


@ -254,7 +254,7 @@ int __cpu_disable(void)
/* /*
* OK - migrate IRQs away from this CPU * OK - migrate IRQs away from this CPU
*/ */
migrate_irqs(); irq_migrate_all_off_this_cpu();
/* /*
* Flush user cache and TLB mappings, and then remove this CPU * Flush user cache and TLB mappings, and then remove this CPU


@ -2390,4 +2390,6 @@ void arch_teardown_dma_ops(struct device *dev)
return; return;
arm_teardown_iommu_dma_ops(dev); arm_teardown_iommu_dma_ops(dev);
/* Let arch_setup_dma_ops() start again from scratch upon re-probe */
set_dma_ops(dev, NULL);
} }


@ -247,7 +247,7 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *or
} }
/* Copy arch-dep-instance from template. */ /* Copy arch-dep-instance from template. */
memcpy(code, (unsigned char *)optprobe_template_entry, memcpy(code, (unsigned long *)&optprobe_template_entry,
TMPL_END_IDX * sizeof(kprobe_opcode_t)); TMPL_END_IDX * sizeof(kprobe_opcode_t));
/* Adjust buffer according to instruction. */ /* Adjust buffer according to instruction. */


@ -351,7 +351,7 @@
reg = <0>; reg = <0>;
pinctrl-names = "default"; pinctrl-names = "default";
pinctrl-0 = <&cp0_copper_eth_phy_reset>; pinctrl-0 = <&cp0_copper_eth_phy_reset>;
reset-gpios = <&cp1_gpio1 11 GPIO_ACTIVE_LOW>; reset-gpios = <&cp0_gpio2 11 GPIO_ACTIVE_LOW>;
reset-assert-us = <10000>; reset-assert-us = <10000>;
}; };


@ -37,7 +37,7 @@
}; };
memory@86200000 { memory@86200000 {
reg = <0x0 0x86200000 0x0 0x2600000>; reg = <0x0 0x86200000 0x0 0x2d00000>;
no-map; no-map;
}; };


@ -158,8 +158,8 @@ ENTRY(hchacha_block_neon)
mov w3, w2 mov w3, w2
bl chacha_permute bl chacha_permute
st1 {v0.16b}, [x1], #16 st1 {v0.4s}, [x1], #16
st1 {v3.16b}, [x1] st1 {v3.4s}, [x1]
ldp x29, x30, [sp], #16 ldp x29, x30, [sp], #16
ret ret
@ -532,6 +532,10 @@ ENTRY(chacha_4block_xor_neon)
add v3.4s, v3.4s, v19.4s add v3.4s, v3.4s, v19.4s
add a2, a2, w8 add a2, a2, w8
add a3, a3, w9 add a3, a3, w9
CPU_BE( rev a0, a0 )
CPU_BE( rev a1, a1 )
CPU_BE( rev a2, a2 )
CPU_BE( rev a3, a3 )
ld4r {v24.4s-v27.4s}, [x0], #16 ld4r {v24.4s-v27.4s}, [x0], #16
ld4r {v28.4s-v31.4s}, [x0] ld4r {v28.4s-v31.4s}, [x0]
@ -552,6 +556,10 @@ ENTRY(chacha_4block_xor_neon)
add v7.4s, v7.4s, v23.4s add v7.4s, v7.4s, v23.4s
add a6, a6, w8 add a6, a6, w8
add a7, a7, w9 add a7, a7, w9
CPU_BE( rev a4, a4 )
CPU_BE( rev a5, a5 )
CPU_BE( rev a6, a6 )
CPU_BE( rev a7, a7 )
// x8[0-3] += s2[0] // x8[0-3] += s2[0]
// x9[0-3] += s2[1] // x9[0-3] += s2[1]
@ -569,6 +577,10 @@ ENTRY(chacha_4block_xor_neon)
add v11.4s, v11.4s, v27.4s add v11.4s, v11.4s, v27.4s
add a10, a10, w8 add a10, a10, w8
add a11, a11, w9 add a11, a11, w9
CPU_BE( rev a8, a8 )
CPU_BE( rev a9, a9 )
CPU_BE( rev a10, a10 )
CPU_BE( rev a11, a11 )
// x12[0-3] += s3[0] // x12[0-3] += s3[0]
// x13[0-3] += s3[1] // x13[0-3] += s3[1]
@ -586,6 +598,10 @@ ENTRY(chacha_4block_xor_neon)
add v15.4s, v15.4s, v31.4s add v15.4s, v15.4s, v31.4s
add a14, a14, w8 add a14, a14, w8
add a15, a15, w9 add a15, a15, w9
CPU_BE( rev a12, a12 )
CPU_BE( rev a13, a13 )
CPU_BE( rev a14, a14 )
CPU_BE( rev a15, a15 )
// interleave 32-bit words in state n, n+1 // interleave 32-bit words in state n, n+1
ldp w6, w7, [x2], #64 ldp w6, w7, [x2], #64


@ -36,4 +36,8 @@
#include <arm_neon.h> #include <arm_neon.h>
#endif #endif
#ifdef CONFIG_CC_IS_CLANG
#pragma clang diagnostic ignored "-Wincompatible-pointer-types"
#endif
#endif /* __ASM_NEON_INTRINSICS_H */ #endif /* __ASM_NEON_INTRINSICS_H */


@ -539,8 +539,7 @@ set_hcr:
/* GICv3 system register access */ /* GICv3 system register access */
mrs x0, id_aa64pfr0_el1 mrs x0, id_aa64pfr0_el1
ubfx x0, x0, #24, #4 ubfx x0, x0, #24, #4
cmp x0, #1 cbz x0, 3f
b.ne 3f
mrs_s x0, SYS_ICC_SRE_EL2 mrs_s x0, SYS_ICC_SRE_EL2
orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1 orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1


@ -1702,19 +1702,20 @@ void syscall_trace_exit(struct pt_regs *regs)
} }
/* /*
* SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a.
* We also take into account DIT (bit 24), which is not yet documented, and * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is
* treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be * not described in ARM DDI 0487D.a.
* allocated an EL0 meaning in future. * We treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may
* be allocated an EL0 meaning in future.
* Userspace cannot use these until they have an architectural meaning. * Userspace cannot use these until they have an architectural meaning.
* Note that this follows the SPSR_ELx format, not the AArch32 PSR format. * Note that this follows the SPSR_ELx format, not the AArch32 PSR format.
* We also reserve IL for the kernel; SS is handled dynamically. * We also reserve IL for the kernel; SS is handled dynamically.
*/ */
#define SPSR_EL1_AARCH64_RES0_BITS \ #define SPSR_EL1_AARCH64_RES0_BITS \
(GENMASK_ULL(63,32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \ (GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5)) GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5))
#define SPSR_EL1_AARCH32_RES0_BITS \ #define SPSR_EL1_AARCH32_RES0_BITS \
(GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20)) (GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))
static int valid_compat_regs(struct user_pt_regs *regs) static int valid_compat_regs(struct user_pt_regs *regs)
{ {
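In the masks above, the AArch64 change splits the old reserved range GENMASK_ULL(20, 10) (bits 10-20) into GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) (bits 13-20 plus 10-11), carving bit 12 (SSBS) out of the RES0 set so userspace can set it; the AArch32 mask likewise shrinks from GENMASK_ULL(23, 22) to GENMASK_ULL(22, 22), freeing bit 23, the AArch32 position of SSBS named in the comment.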

View File

@ -339,6 +339,9 @@ void __init setup_arch(char **cmdline_p)
smp_init_cpus(); smp_init_cpus();
smp_build_mpidr_hash(); smp_build_mpidr_hash();
/* Init percpu seeds for random tags after cpus are set up. */
kasan_init_tags();
#ifdef CONFIG_ARM64_SW_TTBR0_PAN #ifdef CONFIG_ARM64_SW_TTBR0_PAN
/* /*
* Make sure init_thread_info.ttbr0 always generates translation * Make sure init_thread_info.ttbr0 always generates translation

View File

@ -252,8 +252,6 @@ void __init kasan_init(void)
memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE); memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
kasan_init_tags();
/* At this point kasan is fully initialized. Enable error messages */ /* At this point kasan is fully initialized. Enable error messages */
init_task.kasan_depth = 0; init_task.kasan_depth = 0;
pr_info("KernelAddressSanitizer initialized\n"); pr_info("KernelAddressSanitizer initialized\n");

View File

@ -70,6 +70,8 @@ static struct platform_device bcm63xx_enet_shared_device = {
static int shared_device_registered; static int shared_device_registered;
static u64 enet_dmamask = DMA_BIT_MASK(32);
static struct resource enet0_res[] = { static struct resource enet0_res[] = {
{ {
.start = -1, /* filled at runtime */ .start = -1, /* filled at runtime */
@ -99,6 +101,8 @@ static struct platform_device bcm63xx_enet0_device = {
.resource = enet0_res, .resource = enet0_res,
.dev = { .dev = {
.platform_data = &enet0_pd, .platform_data = &enet0_pd,
.dma_mask = &enet_dmamask,
.coherent_dma_mask = DMA_BIT_MASK(32),
}, },
}; };
@ -131,6 +135,8 @@ static struct platform_device bcm63xx_enet1_device = {
.resource = enet1_res, .resource = enet1_res,
.dev = { .dev = {
.platform_data = &enet1_pd, .platform_data = &enet1_pd,
.dma_mask = &enet_dmamask,
.coherent_dma_mask = DMA_BIT_MASK(32),
}, },
}; };
@ -157,6 +163,8 @@ static struct platform_device bcm63xx_enetsw_device = {
.resource = enetsw_res, .resource = enetsw_res,
.dev = { .dev = {
.platform_data = &enetsw_pd, .platform_data = &enetsw_pd,
.dma_mask = &enet_dmamask,
.coherent_dma_mask = DMA_BIT_MASK(32),
}, },
}; };

View File

@ -54,10 +54,9 @@ unsigned long __xchg_small(volatile void *ptr, unsigned long val, unsigned int s
unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old, unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
unsigned long new, unsigned int size) unsigned long new, unsigned int size)
{ {
u32 mask, old32, new32, load32; u32 mask, old32, new32, load32, load;
volatile u32 *ptr32; volatile u32 *ptr32;
unsigned int shift; unsigned int shift;
u8 load;
/* Check that ptr is naturally aligned */ /* Check that ptr is naturally aligned */
WARN_ON((unsigned long)ptr & (size - 1)); WARN_ON((unsigned long)ptr & (size - 1));

View File

@ -384,7 +384,8 @@ static void __init bootmem_init(void)
init_initrd(); init_initrd();
reserved_end = (unsigned long) PFN_UP(__pa_symbol(&_end)); reserved_end = (unsigned long) PFN_UP(__pa_symbol(&_end));
memblock_reserve(PHYS_OFFSET, reserved_end << PAGE_SHIFT); memblock_reserve(PHYS_OFFSET,
(reserved_end << PAGE_SHIFT) - PHYS_OFFSET);
/* /*
* max_low_pfn is not a number of pages. The number of pages * max_low_pfn is not a number of pages. The number of pages
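Since memblock_reserve(base, size) takes a size rather than an end address, the span from PHYS_OFFSET up to the page-aligned end of the kernel is (reserved_end << PAGE_SHIFT) - PHYS_OFFSET bytes. As a worked example with purely illustrative numbers, PHYS_OFFSET = 0x20000000 and reserved_end << PAGE_SHIFT = 0x20800000: the old call reserved 0x20800000 bytes starting at 0x20000000, far beyond the kernel image, while the new call reserves only the 0x800000 bytes the image actually occupies.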

View File

@ -31,8 +31,8 @@ static int vmmc_probe(struct platform_device *pdev)
dma_addr_t dma; dma_addr_t dma;
cp1_base = cp1_base =
(void *) CPHYSADDR(dma_alloc_coherent(NULL, CP1_SIZE, (void *) CPHYSADDR(dma_alloc_coherent(&pdev->dev, CP1_SIZE,
&dma, GFP_ATOMIC)); &dma, GFP_KERNEL));
gpio_count = of_gpio_count(pdev->dev.of_node); gpio_count = of_gpio_count(pdev->dev.of_node);
while (gpio_count > 0) { while (gpio_count > 0) {

View File

@ -79,8 +79,6 @@ enum reg_val_type {
REG_64BIT_32BIT, REG_64BIT_32BIT,
/* 32-bit compatible, need truncation for 64-bit ops. */ /* 32-bit compatible, need truncation for 64-bit ops. */
REG_32BIT, REG_32BIT,
/* 32-bit zero extended. */
REG_32BIT_ZERO_EX,
/* 32-bit no sign/zero extension needed. */ /* 32-bit no sign/zero extension needed. */
REG_32BIT_POS REG_32BIT_POS
}; };
@ -343,12 +341,15 @@ static int build_int_epilogue(struct jit_ctx *ctx, int dest_reg)
const struct bpf_prog *prog = ctx->skf; const struct bpf_prog *prog = ctx->skf;
int stack_adjust = ctx->stack_size; int stack_adjust = ctx->stack_size;
int store_offset = stack_adjust - 8; int store_offset = stack_adjust - 8;
enum reg_val_type td;
int r0 = MIPS_R_V0; int r0 = MIPS_R_V0;
if (dest_reg == MIPS_R_RA && if (dest_reg == MIPS_R_RA) {
get_reg_val_type(ctx, prog->len, BPF_REG_0) == REG_32BIT_ZERO_EX)
/* Don't let zero extended value escape. */ /* Don't let zero extended value escape. */
td = get_reg_val_type(ctx, prog->len, BPF_REG_0);
if (td == REG_64BIT)
emit_instr(ctx, sll, r0, r0, 0); emit_instr(ctx, sll, r0, r0, 0);
}
if (ctx->flags & EBPF_SAVE_RA) { if (ctx->flags & EBPF_SAVE_RA) {
emit_instr(ctx, ld, MIPS_R_RA, store_offset, MIPS_R_SP); emit_instr(ctx, ld, MIPS_R_RA, store_offset, MIPS_R_SP);
@ -692,7 +693,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
if (dst < 0) if (dst < 0)
return dst; return dst;
td = get_reg_val_type(ctx, this_idx, insn->dst_reg); td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) { if (td == REG_64BIT) {
/* sign extend */ /* sign extend */
emit_instr(ctx, sll, dst, dst, 0); emit_instr(ctx, sll, dst, dst, 0);
} }
@ -707,7 +708,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
if (dst < 0) if (dst < 0)
return dst; return dst;
td = get_reg_val_type(ctx, this_idx, insn->dst_reg); td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) { if (td == REG_64BIT) {
/* sign extend */ /* sign extend */
emit_instr(ctx, sll, dst, dst, 0); emit_instr(ctx, sll, dst, dst, 0);
} }
@ -721,7 +722,7 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
if (dst < 0) if (dst < 0)
return dst; return dst;
td = get_reg_val_type(ctx, this_idx, insn->dst_reg); td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) if (td == REG_64BIT)
/* sign extend */ /* sign extend */
emit_instr(ctx, sll, dst, dst, 0); emit_instr(ctx, sll, dst, dst, 0);
if (insn->imm == 1) { if (insn->imm == 1) {
@ -860,13 +861,13 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
if (src < 0 || dst < 0) if (src < 0 || dst < 0)
return -EINVAL; return -EINVAL;
td = get_reg_val_type(ctx, this_idx, insn->dst_reg); td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) { if (td == REG_64BIT) {
/* sign extend */ /* sign extend */
emit_instr(ctx, sll, dst, dst, 0); emit_instr(ctx, sll, dst, dst, 0);
} }
did_move = false; did_move = false;
ts = get_reg_val_type(ctx, this_idx, insn->src_reg); ts = get_reg_val_type(ctx, this_idx, insn->src_reg);
if (ts == REG_64BIT || ts == REG_32BIT_ZERO_EX) { if (ts == REG_64BIT) {
int tmp_reg = MIPS_R_AT; int tmp_reg = MIPS_R_AT;
if (bpf_op == BPF_MOV) { if (bpf_op == BPF_MOV) {
@ -1254,8 +1255,7 @@ jeq_common:
if (insn->imm == 64 && td == REG_32BIT) if (insn->imm == 64 && td == REG_32BIT)
emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32);
if (insn->imm != 64 && if (insn->imm != 64 && td == REG_64BIT) {
(td == REG_64BIT || td == REG_32BIT_ZERO_EX)) {
/* sign extend */ /* sign extend */
emit_instr(ctx, sll, dst, dst, 0); emit_instr(ctx, sll, dst, dst, 0);
} }
@ -1819,7 +1819,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
/* Update the icache */ /* Update the icache */
flush_icache_range((unsigned long)ctx.target, flush_icache_range((unsigned long)ctx.target,
(unsigned long)(ctx.target + ctx.idx * sizeof(u32))); (unsigned long)&ctx.target[ctx.idx]);
if (bpf_jit_enable > 1) if (bpf_jit_enable > 1)
/* Dump JIT code */ /* Dump JIT code */

View File

@ -308,15 +308,29 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
long do_syscall_trace_enter(struct pt_regs *regs) long do_syscall_trace_enter(struct pt_regs *regs)
{ {
if (test_thread_flag(TIF_SYSCALL_TRACE) && if (test_thread_flag(TIF_SYSCALL_TRACE)) {
tracehook_report_syscall_entry(regs)) { int rc = tracehook_report_syscall_entry(regs);
/* /*
* Tracing decided this syscall should not happen or the * As tracesys_next does not set %r28 to -ENOSYS
* debugger stored an invalid system call number. Skip * when %r20 is set to -1, initialize it here.
* the system call and the system call restart handling. */
regs->gr[28] = -ENOSYS;
if (rc) {
/*
* A nonzero return code from
* tracehook_report_syscall_entry() tells us
* to prevent the syscall execution. Skip
* the syscall call and the syscall restart handling.
*
* Note that the tracer may also just change
* regs->gr[20] to an invalid syscall number,
* that is handled by tracesys_next.
*/ */
regs->gr[20] = -1UL; regs->gr[20] = -1UL;
goto out; return -1;
}
} }
/* Do the secure computing check after ptrace. */ /* Do the secure computing check after ptrace. */
@ -340,7 +354,6 @@ long do_syscall_trace_enter(struct pt_regs *regs)
regs->gr[24] & 0xffffffff, regs->gr[24] & 0xffffffff,
regs->gr[23] & 0xffffffff); regs->gr[23] & 0xffffffff);
out:
/* /*
* Sign extend the syscall number to 64bit since it may have been * Sign extend the syscall number to 64bit since it may have been
* modified by a compat ptrace call * modified by a compat ptrace call

View File

@ -1593,6 +1593,8 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
pnv_pci_ioda2_setup_dma_pe(phb, pe); pnv_pci_ioda2_setup_dma_pe(phb, pe);
#ifdef CONFIG_IOMMU_API #ifdef CONFIG_IOMMU_API
iommu_register_group(&pe->table_group,
pe->phb->hose->global_number, pe->pe_number);
pnv_ioda_setup_bus_iommu_group(pe, &pe->table_group, NULL); pnv_ioda_setup_bus_iommu_group(pe, &pe->table_group, NULL);
#endif #endif
} }

View File

@ -1147,6 +1147,8 @@ static int pnv_tce_iommu_bus_notifier(struct notifier_block *nb,
return 0; return 0;
pe = &phb->ioda.pe_array[pdn->pe_number]; pe = &phb->ioda.pe_array[pdn->pe_number];
if (!pe->table_group.group)
return 0;
iommu_add_device(&pe->table_group, dev); iommu_add_device(&pe->table_group, dev);
return 0; return 0;
case BUS_NOTIFY_DEL_DEVICE: case BUS_NOTIFY_DEL_DEVICE:

View File

@ -297,7 +297,7 @@ static int shadow_crycb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
scb_s->crycbd = 0; scb_s->crycbd = 0;
apie_h = vcpu->arch.sie_block->eca & ECA_APIE; apie_h = vcpu->arch.sie_block->eca & ECA_APIE;
if (!apie_h && !key_msk) if (!apie_h && (!key_msk || fmt_o == CRYCB_FORMAT0))
return 0; return 0;
if (!crycb_addr) if (!crycb_addr)

View File

@ -1,3 +1,3 @@
ifneq ($(CONFIG_BUILTIN_DTB_SOURCE),"") ifneq ($(CONFIG_BUILTIN_DTB_SOURCE),"")
obj-y += $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o obj-$(CONFIG_USE_BUILTIN_DTB) += $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o
endif endif

View File

@ -841,7 +841,7 @@ union hv_gpa_page_range {
* count is equal with how many entries of union hv_gpa_page_range can * count is equal with how many entries of union hv_gpa_page_range can
* be populated into the input parameter page. * be populated into the input parameter page.
*/ */
#define HV_MAX_FLUSH_REP_COUNT (PAGE_SIZE - 2 * sizeof(u64) / \ #define HV_MAX_FLUSH_REP_COUNT ((PAGE_SIZE - 2 * sizeof(u64)) / \
sizeof(union hv_gpa_page_range)) sizeof(union hv_gpa_page_range))
struct hv_guest_mapping_flush_list { struct hv_guest_mapping_flush_list {
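Without the added parentheses, the division binds tighter than the subtraction, so the old expression came out to nearly a full PAGE_SIZE worth of entries instead of the number that actually fit after the two u64 header fields. A minimal standalone sketch of the arithmetic, assuming 4 KiB pages and an 8-byte union hv_gpa_page_range (both values are assumptions, not taken from the hunk):

#include <stdio.h>

#define MY_PAGE_SIZE 4096UL   /* assumed page size */
#define ENTRY_SIZE   8UL      /* assumed sizeof(union hv_gpa_page_range) */

int main(void)
{
	/* Old form: 16 / 8 is evaluated first, giving 4096 - 2 = 4094. */
	unsigned long before = MY_PAGE_SIZE - 2 * sizeof(unsigned long long) / ENTRY_SIZE;
	/* Fixed form: (4096 - 16) / 8 = 510 entries actually fit in the page. */
	unsigned long after = (MY_PAGE_SIZE - 2 * sizeof(unsigned long long)) / ENTRY_SIZE;

	printf("before=%lu after=%lu\n", before, after);
	return 0;
}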

View File

@ -299,6 +299,7 @@ union kvm_mmu_extended_role {
unsigned int cr4_smap:1; unsigned int cr4_smap:1;
unsigned int cr4_smep:1; unsigned int cr4_smep:1;
unsigned int cr4_la57:1; unsigned int cr4_la57:1;
unsigned int maxphyaddr:6;
}; };
}; };
@ -397,6 +398,7 @@ struct kvm_mmu {
void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
u64 *spte, const void *pte); u64 *spte, const void *pte);
hpa_t root_hpa; hpa_t root_hpa;
gpa_t root_cr3;
union kvm_mmu_role mmu_role; union kvm_mmu_role mmu_role;
u8 root_level; u8 root_level;
u8 shadow_root_level; u8 shadow_root_level;

View File

@ -284,7 +284,7 @@ do { \
__put_user_goto(x, ptr, "l", "k", "ir", label); \ __put_user_goto(x, ptr, "l", "k", "ir", label); \
break; \ break; \
case 8: \ case 8: \
__put_user_goto_u64((__typeof__(*ptr))(x), ptr, label); \ __put_user_goto_u64(x, ptr, label); \
break; \ break; \
default: \ default: \
__put_user_bad(); \ __put_user_bad(); \
@ -431,8 +431,10 @@ do { \
({ \ ({ \
__label__ __pu_label; \ __label__ __pu_label; \
int __pu_err = -EFAULT; \ int __pu_err = -EFAULT; \
__typeof__(*(ptr)) __pu_val; \
__pu_val = x; \
__uaccess_begin(); \ __uaccess_begin(); \
__put_user_size((x), (ptr), (size), __pu_label); \ __put_user_size(__pu_val, (ptr), (size), __pu_label); \
__pu_err = 0; \ __pu_err = 0; \
__pu_label: \ __pu_label: \
__uaccess_end(); \ __uaccess_end(); \
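The hunk above makes the value expression get evaluated once, into __pu_val of the pointee's type, before __uaccess_begin() opens the user-access window, so any function call hidden in the argument runs outside that window. Below is a minimal standalone sketch of that ordering pattern; PUT_IN_WINDOW(), critical_begin() and critical_end() are hypothetical stand-ins, not kernel APIs:

#include <stdio.h>

static int window_open;
static void critical_begin(void) { window_open = 1; }
static void critical_end(void)   { window_open = 0; }

/* Evaluate the argument exactly once, before opening the window. */
#define PUT_IN_WINDOW(x, ptr) do {                                  \
	__typeof__(*(ptr)) __val = (x);  /* window still closed here */ \
	critical_begin();                                           \
	*(ptr) = __val;                                             \
	critical_end();                                             \
} while (0)

static int noisy_value(void)
{
	/* With the old ordering this would have run inside the window. */
	printf("argument evaluated with window_open=%d\n", window_open);
	return 42;
}

int main(void)
{
	int dst;

	PUT_IN_WINDOW(noisy_value(), &dst);
	printf("dst=%d\n", dst);
	return 0;
}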

View File

@ -335,6 +335,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0; unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0; unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0;
unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0; unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0;
unsigned f_la57 = 0;
/* cpuid 1.edx */ /* cpuid 1.edx */
const u32 kvm_cpuid_1_edx_x86_features = const u32 kvm_cpuid_1_edx_x86_features =
@ -489,7 +490,10 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
// TSC_ADJUST is emulated // TSC_ADJUST is emulated
entry->ebx |= F(TSC_ADJUST); entry->ebx |= F(TSC_ADJUST);
entry->ecx &= kvm_cpuid_7_0_ecx_x86_features; entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;
f_la57 = entry->ecx & F(LA57);
cpuid_mask(&entry->ecx, CPUID_7_ECX); cpuid_mask(&entry->ecx, CPUID_7_ECX);
/* Set LA57 based on hardware capability. */
entry->ecx |= f_la57;
entry->ecx |= f_umip; entry->ecx |= f_umip;
/* PKU is not yet implemented for shadow paging. */ /* PKU is not yet implemented for shadow paging. */
if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE)) if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))

View File

@ -3555,6 +3555,7 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
&invalid_list); &invalid_list);
mmu->root_hpa = INVALID_PAGE; mmu->root_hpa = INVALID_PAGE;
} }
mmu->root_cr3 = 0;
} }
kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list); kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
@ -3610,6 +3611,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root); vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root);
} else } else
BUG(); BUG();
vcpu->arch.mmu->root_cr3 = vcpu->arch.mmu->get_cr3(vcpu);
return 0; return 0;
} }
@ -3618,10 +3620,11 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
{ {
struct kvm_mmu_page *sp; struct kvm_mmu_page *sp;
u64 pdptr, pm_mask; u64 pdptr, pm_mask;
gfn_t root_gfn; gfn_t root_gfn, root_cr3;
int i; int i;
root_gfn = vcpu->arch.mmu->get_cr3(vcpu) >> PAGE_SHIFT; root_cr3 = vcpu->arch.mmu->get_cr3(vcpu);
root_gfn = root_cr3 >> PAGE_SHIFT;
if (mmu_check_root(vcpu, root_gfn)) if (mmu_check_root(vcpu, root_gfn))
return 1; return 1;
@ -3646,7 +3649,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
++sp->root_count; ++sp->root_count;
spin_unlock(&vcpu->kvm->mmu_lock); spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu->root_hpa = root; vcpu->arch.mmu->root_hpa = root;
return 0; goto set_root_cr3;
} }
/* /*
@ -3712,6 +3715,9 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->lm_root); vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->lm_root);
} }
set_root_cr3:
vcpu->arch.mmu->root_cr3 = root_cr3;
return 0; return 0;
} }
@ -4163,7 +4169,7 @@ static bool cached_root_available(struct kvm_vcpu *vcpu, gpa_t new_cr3,
struct kvm_mmu_root_info root; struct kvm_mmu_root_info root;
struct kvm_mmu *mmu = vcpu->arch.mmu; struct kvm_mmu *mmu = vcpu->arch.mmu;
root.cr3 = mmu->get_cr3(vcpu); root.cr3 = mmu->root_cr3;
root.hpa = mmu->root_hpa; root.hpa = mmu->root_hpa;
for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) { for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
@ -4176,6 +4182,7 @@ static bool cached_root_available(struct kvm_vcpu *vcpu, gpa_t new_cr3,
} }
mmu->root_hpa = root.hpa; mmu->root_hpa = root.hpa;
mmu->root_cr3 = root.cr3;
return i < KVM_MMU_NUM_PREV_ROOTS; return i < KVM_MMU_NUM_PREV_ROOTS;
} }
@ -4770,6 +4777,7 @@ static union kvm_mmu_extended_role kvm_calc_mmu_role_ext(struct kvm_vcpu *vcpu)
ext.cr4_pse = !!is_pse(vcpu); ext.cr4_pse = !!is_pse(vcpu);
ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE); ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE);
ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57); ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57);
ext.maxphyaddr = cpuid_maxphyaddr(vcpu);
ext.valid = 1; ext.valid = 1;
@ -5516,11 +5524,13 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
vcpu->arch.walk_mmu = &vcpu->arch.root_mmu; vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
vcpu->arch.root_mmu.root_hpa = INVALID_PAGE; vcpu->arch.root_mmu.root_hpa = INVALID_PAGE;
vcpu->arch.root_mmu.root_cr3 = 0;
vcpu->arch.root_mmu.translate_gpa = translate_gpa; vcpu->arch.root_mmu.translate_gpa = translate_gpa;
for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
vcpu->arch.root_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID; vcpu->arch.root_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
vcpu->arch.guest_mmu.root_hpa = INVALID_PAGE; vcpu->arch.guest_mmu.root_hpa = INVALID_PAGE;
vcpu->arch.guest_mmu.root_cr3 = 0;
vcpu->arch.guest_mmu.translate_gpa = translate_gpa; vcpu->arch.guest_mmu.translate_gpa = translate_gpa;
for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
vcpu->arch.guest_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID; vcpu->arch.guest_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;

View File

@ -117,67 +117,11 @@ __visible bool ex_handler_fprestore(const struct exception_table_entry *fixup,
} }
EXPORT_SYMBOL_GPL(ex_handler_fprestore); EXPORT_SYMBOL_GPL(ex_handler_fprestore);
/* Helper to check whether a uaccess fault indicates a kernel bug. */
static bool bogus_uaccess(struct pt_regs *regs, int trapnr,
unsigned long fault_addr)
{
/* This is the normal case: #PF with a fault address in userspace. */
if (trapnr == X86_TRAP_PF && fault_addr < TASK_SIZE_MAX)
return false;
/*
* This code can be reached for machine checks, but only if the #MC
* handler has already decided that it looks like a candidate for fixup.
* This e.g. happens when attempting to access userspace memory which
* the CPU can't access because of uncorrectable bad memory.
*/
if (trapnr == X86_TRAP_MC)
return false;
/*
* There are two remaining exception types we might encounter here:
* - #PF for faulting accesses to kernel addresses
* - #GP for faulting accesses to noncanonical addresses
* Complain about anything else.
*/
if (trapnr != X86_TRAP_PF && trapnr != X86_TRAP_GP) {
WARN(1, "unexpected trap %d in uaccess\n", trapnr);
return false;
}
/*
* This is a faulting memory access in kernel space, on a kernel
* address, in a usercopy function. This can e.g. be caused by improper
* use of helpers like __put_user and by improper attempts to access
* userspace addresses in KERNEL_DS regions.
* The one (semi-)legitimate exception are probe_kernel_{read,write}(),
* which can be invoked from places like kgdb, /dev/mem (for reading)
* and privileged BPF code (for reading).
* The probe_kernel_*() functions set the kernel_uaccess_faults_ok flag
* to tell us that faulting on kernel addresses, and even noncanonical
* addresses, in a userspace accessor does not necessarily imply a
* kernel bug, root might just be doing weird stuff.
*/
if (current->kernel_uaccess_faults_ok)
return false;
/* This is bad. Refuse the fixup so that we go into die(). */
if (trapnr == X86_TRAP_PF) {
pr_emerg("BUG: pagefault on kernel address 0x%lx in non-whitelisted uaccess\n",
fault_addr);
} else {
pr_emerg("BUG: GPF in non-whitelisted uaccess (non-canonical address?)\n");
}
return true;
}
__visible bool ex_handler_uaccess(const struct exception_table_entry *fixup, __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup,
struct pt_regs *regs, int trapnr, struct pt_regs *regs, int trapnr,
unsigned long error_code, unsigned long error_code,
unsigned long fault_addr) unsigned long fault_addr)
{ {
if (bogus_uaccess(regs, trapnr, fault_addr))
return false;
regs->ip = ex_fixup_addr(fixup); regs->ip = ex_fixup_addr(fixup);
return true; return true;
} }
@ -188,8 +132,6 @@ __visible bool ex_handler_ext(const struct exception_table_entry *fixup,
unsigned long error_code, unsigned long error_code,
unsigned long fault_addr) unsigned long fault_addr)
{ {
if (bogus_uaccess(regs, trapnr, fault_addr))
return false;
/* Special hack for uaccess_err */ /* Special hack for uaccess_err */
current->thread.uaccess_err = 1; current->thread.uaccess_err = 1;
regs->ip = ex_fixup_addr(fixup); regs->ip = ex_fixup_addr(fixup);

View File

@ -122,8 +122,10 @@ static void alg_do_release(const struct af_alg_type *type, void *private)
int af_alg_release(struct socket *sock) int af_alg_release(struct socket *sock)
{ {
if (sock->sk) if (sock->sk) {
sock_put(sock->sk); sock_put(sock->sk);
sock->sk = NULL;
}
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(af_alg_release); EXPORT_SYMBOL_GPL(af_alg_release);

View File

@ -95,7 +95,7 @@ static void __update_runtime_status(struct device *dev, enum rpm_status status)
static void pm_runtime_deactivate_timer(struct device *dev) static void pm_runtime_deactivate_timer(struct device *dev)
{ {
if (dev->power.timer_expires > 0) { if (dev->power.timer_expires > 0) {
hrtimer_cancel(&dev->power.suspend_timer); hrtimer_try_to_cancel(&dev->power.suspend_timer);
dev->power.timer_expires = 0; dev->power.timer_expires = 0;
} }
} }

View File

@ -144,8 +144,7 @@ static void __init at91sam9x5_pmc_setup(struct device_node *np,
return; return;
at91sam9x5_pmc = pmc_data_allocate(PMC_MAIN + 1, at91sam9x5_pmc = pmc_data_allocate(PMC_MAIN + 1,
nck(at91sam9x5_systemck), nck(at91sam9x5_systemck), 31, 0);
nck(at91sam9x35_periphck), 0);
if (!at91sam9x5_pmc) if (!at91sam9x5_pmc)
return; return;
@ -210,7 +209,7 @@ static void __init at91sam9x5_pmc_setup(struct device_node *np,
parent_names[1] = "mainck"; parent_names[1] = "mainck";
parent_names[2] = "plladivck"; parent_names[2] = "plladivck";
parent_names[3] = "utmick"; parent_names[3] = "utmick";
parent_names[4] = "mck"; parent_names[4] = "masterck";
for (i = 0; i < 2; i++) { for (i = 0; i < 2; i++) {
char name[6]; char name[6];

View File

@ -240,7 +240,7 @@ static void __init sama5d2_pmc_setup(struct device_node *np)
parent_names[1] = "mainck"; parent_names[1] = "mainck";
parent_names[2] = "plladivck"; parent_names[2] = "plladivck";
parent_names[3] = "utmick"; parent_names[3] = "utmick";
parent_names[4] = "mck"; parent_names[4] = "masterck";
for (i = 0; i < 3; i++) { for (i = 0; i < 3; i++) {
char name[6]; char name[6];
@ -291,7 +291,7 @@ static void __init sama5d2_pmc_setup(struct device_node *np)
parent_names[1] = "mainck"; parent_names[1] = "mainck";
parent_names[2] = "plladivck"; parent_names[2] = "plladivck";
parent_names[3] = "utmick"; parent_names[3] = "utmick";
parent_names[4] = "mck"; parent_names[4] = "masterck";
parent_names[5] = "audiopll_pmcck"; parent_names[5] = "audiopll_pmcck";
for (i = 0; i < ARRAY_SIZE(sama5d2_gck); i++) { for (i = 0; i < ARRAY_SIZE(sama5d2_gck); i++) {
hw = at91_clk_register_generated(regmap, &pmc_pcr_lock, hw = at91_clk_register_generated(regmap, &pmc_pcr_lock,

View File

@ -207,7 +207,7 @@ static void __init sama5d4_pmc_setup(struct device_node *np)
parent_names[1] = "mainck"; parent_names[1] = "mainck";
parent_names[2] = "plladivck"; parent_names[2] = "plladivck";
parent_names[3] = "utmick"; parent_names[3] = "utmick";
parent_names[4] = "mck"; parent_names[4] = "masterck";
for (i = 0; i < 3; i++) { for (i = 0; i < 3; i++) {
char name[6]; char name[6];

View File

@ -264,9 +264,9 @@ static SUNXI_CCU_GATE(ahb1_mmc1_clk, "ahb1-mmc1", "ahb1",
static SUNXI_CCU_GATE(ahb1_mmc2_clk, "ahb1-mmc2", "ahb1", static SUNXI_CCU_GATE(ahb1_mmc2_clk, "ahb1-mmc2", "ahb1",
0x060, BIT(10), 0); 0x060, BIT(10), 0);
static SUNXI_CCU_GATE(ahb1_mmc3_clk, "ahb1-mmc3", "ahb1", static SUNXI_CCU_GATE(ahb1_mmc3_clk, "ahb1-mmc3", "ahb1",
0x060, BIT(12), 0); 0x060, BIT(11), 0);
static SUNXI_CCU_GATE(ahb1_nand1_clk, "ahb1-nand1", "ahb1", static SUNXI_CCU_GATE(ahb1_nand1_clk, "ahb1-nand1", "ahb1",
0x060, BIT(13), 0); 0x060, BIT(12), 0);
static SUNXI_CCU_GATE(ahb1_nand0_clk, "ahb1-nand0", "ahb1", static SUNXI_CCU_GATE(ahb1_nand0_clk, "ahb1-nand0", "ahb1",
0x060, BIT(13), 0); 0x060, BIT(13), 0);
static SUNXI_CCU_GATE(ahb1_sdram_clk, "ahb1-sdram", "ahb1", static SUNXI_CCU_GATE(ahb1_sdram_clk, "ahb1-sdram", "ahb1",

View File

@ -542,7 +542,7 @@ static struct ccu_reset_map sun8i_v3s_ccu_resets[] = {
[RST_BUS_OHCI0] = { 0x2c0, BIT(29) }, [RST_BUS_OHCI0] = { 0x2c0, BIT(29) },
[RST_BUS_VE] = { 0x2c4, BIT(0) }, [RST_BUS_VE] = { 0x2c4, BIT(0) },
[RST_BUS_TCON0] = { 0x2c4, BIT(3) }, [RST_BUS_TCON0] = { 0x2c4, BIT(4) },
[RST_BUS_CSI] = { 0x2c4, BIT(8) }, [RST_BUS_CSI] = { 0x2c4, BIT(8) },
[RST_BUS_DE] = { 0x2c4, BIT(12) }, [RST_BUS_DE] = { 0x2c4, BIT(12) },
[RST_BUS_DBG] = { 0x2c4, BIT(31) }, [RST_BUS_DBG] = { 0x2c4, BIT(31) },

View File

@ -187,8 +187,8 @@ static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
cpufreq_cooling_unregister(priv->cdev); cpufreq_cooling_unregister(priv->cdev);
dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
kfree(priv);
dev_pm_opp_remove_all_dynamic(priv->cpu_dev); dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
kfree(priv);
return 0; return 0;
} }

View File

@ -30,7 +30,7 @@ static inline int cc_pm_init(struct cc_drvdata *drvdata)
return 0; return 0;
} }
static void cc_pm_go(struct cc_drvdata *drvdata) {} static inline void cc_pm_go(struct cc_drvdata *drvdata) {}
static inline void cc_pm_fini(struct cc_drvdata *drvdata) {} static inline void cc_pm_fini(struct cc_drvdata *drvdata) {}

View File

@ -30,6 +30,7 @@
#define GPIO_REG_EDGE 0xA0 #define GPIO_REG_EDGE 0xA0
struct mtk_gc { struct mtk_gc {
struct irq_chip irq_chip;
struct gpio_chip chip; struct gpio_chip chip;
spinlock_t lock; spinlock_t lock;
int bank; int bank;
@ -189,13 +190,6 @@ mediatek_gpio_irq_type(struct irq_data *d, unsigned int type)
return 0; return 0;
} }
static struct irq_chip mediatek_gpio_irq_chip = {
.irq_unmask = mediatek_gpio_irq_unmask,
.irq_mask = mediatek_gpio_irq_mask,
.irq_mask_ack = mediatek_gpio_irq_mask,
.irq_set_type = mediatek_gpio_irq_type,
};
static int static int
mediatek_gpio_xlate(struct gpio_chip *chip, mediatek_gpio_xlate(struct gpio_chip *chip,
const struct of_phandle_args *spec, u32 *flags) const struct of_phandle_args *spec, u32 *flags)
@ -254,6 +248,13 @@ mediatek_gpio_bank_probe(struct device *dev,
return ret; return ret;
} }
rg->irq_chip.name = dev_name(dev);
rg->irq_chip.parent_device = dev;
rg->irq_chip.irq_unmask = mediatek_gpio_irq_unmask;
rg->irq_chip.irq_mask = mediatek_gpio_irq_mask;
rg->irq_chip.irq_mask_ack = mediatek_gpio_irq_mask;
rg->irq_chip.irq_set_type = mediatek_gpio_irq_type;
if (mtk->gpio_irq) { if (mtk->gpio_irq) {
/* /*
* Manually request the irq here instead of passing * Manually request the irq here instead of passing
@ -270,14 +271,14 @@ mediatek_gpio_bank_probe(struct device *dev,
return ret; return ret;
} }
ret = gpiochip_irqchip_add(&rg->chip, &mediatek_gpio_irq_chip, ret = gpiochip_irqchip_add(&rg->chip, &rg->irq_chip,
0, handle_simple_irq, IRQ_TYPE_NONE); 0, handle_simple_irq, IRQ_TYPE_NONE);
if (ret) { if (ret) {
dev_err(dev, "failed to add gpiochip_irqchip\n"); dev_err(dev, "failed to add gpiochip_irqchip\n");
return ret; return ret;
} }
gpiochip_set_chained_irqchip(&rg->chip, &mediatek_gpio_irq_chip, gpiochip_set_chained_irqchip(&rg->chip, &rg->irq_chip,
mtk->gpio_irq, NULL); mtk->gpio_irq, NULL);
} }
@ -310,7 +311,6 @@ mediatek_gpio_probe(struct platform_device *pdev)
mtk->gpio_irq = irq_of_parse_and_map(np, 0); mtk->gpio_irq = irq_of_parse_and_map(np, 0);
mtk->dev = dev; mtk->dev = dev;
platform_set_drvdata(pdev, mtk); platform_set_drvdata(pdev, mtk);
mediatek_gpio_irq_chip.name = dev_name(dev);
for (i = 0; i < MTK_BANK_CNT; i++) { for (i = 0; i < MTK_BANK_CNT; i++) {
ret = mediatek_gpio_bank_probe(dev, np, i); ret = mediatek_gpio_bank_probe(dev, np, i);

View File

@ -245,6 +245,7 @@ static bool pxa_gpio_has_pinctrl(void)
{ {
switch (gpio_type) { switch (gpio_type) {
case PXA3XX_GPIO: case PXA3XX_GPIO:
case MMP2_GPIO:
return false; return false;
default: default:

View File

@ -212,6 +212,7 @@ int amdgpu_driver_load_kms(struct drm_device *dev, unsigned long flags)
} }
if (amdgpu_device_is_px(dev)) { if (amdgpu_device_is_px(dev)) {
dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP);
pm_runtime_use_autosuspend(dev->dev); pm_runtime_use_autosuspend(dev->dev);
pm_runtime_set_autosuspend_delay(dev->dev, 5000); pm_runtime_set_autosuspend_delay(dev->dev, 5000);
pm_runtime_set_active(dev->dev); pm_runtime_set_active(dev->dev);

View File

@ -406,6 +406,7 @@ struct amdgpu_crtc {
struct amdgpu_flip_work *pflip_works; struct amdgpu_flip_work *pflip_works;
enum amdgpu_flip_status pflip_status; enum amdgpu_flip_status pflip_status;
int deferred_flip_completion; int deferred_flip_completion;
u64 last_flip_vblank;
/* pll sharing */ /* pll sharing */
struct amdgpu_atom_ss ss; struct amdgpu_atom_ss ss;
bool ss_enabled; bool ss_enabled;

View File

@ -652,12 +652,14 @@ void amdgpu_vm_move_to_lru_tail(struct amdgpu_device *adev,
struct ttm_bo_global *glob = adev->mman.bdev.glob; struct ttm_bo_global *glob = adev->mman.bdev.glob;
struct amdgpu_vm_bo_base *bo_base; struct amdgpu_vm_bo_base *bo_base;
#if 0
if (vm->bulk_moveable) { if (vm->bulk_moveable) {
spin_lock(&glob->lru_lock); spin_lock(&glob->lru_lock);
ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move); ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move);
spin_unlock(&glob->lru_lock); spin_unlock(&glob->lru_lock);
return; return;
} }
#endif
memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move)); memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move));

View File

@ -128,7 +128,7 @@ static const struct soc15_reg_golden golden_settings_sdma0_4_2_init[] = {
static const struct soc15_reg_golden golden_settings_sdma0_4_2[] = static const struct soc15_reg_golden golden_settings_sdma0_4_2[] =
{ {
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 0x02831d07), SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CLK_CTRL, 0xffffffff, 0x3f000100), SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CLK_CTRL, 0xffffffff, 0x3f000100),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0000773f, 0x00004002), SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0000773f, 0x00004002),
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002), SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002),
@ -158,7 +158,7 @@ static const struct soc15_reg_golden golden_settings_sdma0_4_2[] =
}; };
static const struct soc15_reg_golden golden_settings_sdma1_4_2[] = { static const struct soc15_reg_golden golden_settings_sdma1_4_2[] = {
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831d07), SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07),
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100), SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100),
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0000773f, 0x00004002), SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0000773f, 0x00004002),
SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002), SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002),

View File

@ -303,12 +303,11 @@ static void dm_pflip_high_irq(void *interrupt_params)
return; return;
} }
/* Update to correct count(s) if racing with vblank irq */
amdgpu_crtc->last_flip_vblank = drm_crtc_accurate_vblank_count(&amdgpu_crtc->base);
/* wake up userspace */ /* wake up userspace */
if (amdgpu_crtc->event) { if (amdgpu_crtc->event) {
/* Update to correct count(s) if racing with vblank irq */
drm_crtc_accurate_vblank_count(&amdgpu_crtc->base);
drm_crtc_send_vblank_event(&amdgpu_crtc->base, amdgpu_crtc->event); drm_crtc_send_vblank_event(&amdgpu_crtc->base, amdgpu_crtc->event);
/* page flip completed. clean up */ /* page flip completed. clean up */
@ -786,12 +785,13 @@ static int dm_suspend(void *handle)
struct amdgpu_display_manager *dm = &adev->dm; struct amdgpu_display_manager *dm = &adev->dm;
int ret = 0; int ret = 0;
WARN_ON(adev->dm.cached_state);
adev->dm.cached_state = drm_atomic_helper_suspend(adev->ddev);
s3_handle_mst(adev->ddev, true); s3_handle_mst(adev->ddev, true);
amdgpu_dm_irq_suspend(adev); amdgpu_dm_irq_suspend(adev);
WARN_ON(adev->dm.cached_state);
adev->dm.cached_state = drm_atomic_helper_suspend(adev->ddev);
dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3); dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);

View File

@ -696,6 +696,11 @@ static void dce11_update_clocks(struct clk_mgr *clk_mgr,
{ {
struct dce_clk_mgr *clk_mgr_dce = TO_DCE_CLK_MGR(clk_mgr); struct dce_clk_mgr *clk_mgr_dce = TO_DCE_CLK_MGR(clk_mgr);
struct dm_pp_power_level_change_request level_change_req; struct dm_pp_power_level_change_request level_change_req;
int patched_disp_clk = context->bw.dce.dispclk_khz;
/*TODO: W/A for dal3 linux, investigate why this works */
if (!clk_mgr_dce->dfs_bypass_active)
patched_disp_clk = patched_disp_clk * 115 / 100;
level_change_req.power_level = dce_get_required_clocks_state(clk_mgr, context); level_change_req.power_level = dce_get_required_clocks_state(clk_mgr, context);
/* get max clock state from PPLIB */ /* get max clock state from PPLIB */
@ -705,9 +710,9 @@ static void dce11_update_clocks(struct clk_mgr *clk_mgr,
clk_mgr_dce->cur_min_clks_state = level_change_req.power_level; clk_mgr_dce->cur_min_clks_state = level_change_req.power_level;
} }
if (should_set_clock(safe_to_lower, context->bw.dce.dispclk_khz, clk_mgr->clks.dispclk_khz)) { if (should_set_clock(safe_to_lower, patched_disp_clk, clk_mgr->clks.dispclk_khz)) {
context->bw.dce.dispclk_khz = dce_set_clock(clk_mgr, context->bw.dce.dispclk_khz); context->bw.dce.dispclk_khz = dce_set_clock(clk_mgr, patched_disp_clk);
clk_mgr->clks.dispclk_khz = context->bw.dce.dispclk_khz; clk_mgr->clks.dispclk_khz = patched_disp_clk;
} }
dce11_pplib_apply_display_requirements(clk_mgr->ctx->dc, context); dce11_pplib_apply_display_requirements(clk_mgr->ctx->dc, context);
} }

View File

@ -37,6 +37,10 @@ void dce100_prepare_bandwidth(
struct dc *dc, struct dc *dc,
struct dc_state *context); struct dc_state *context);
void dce100_optimize_bandwidth(
struct dc *dc,
struct dc_state *context);
bool dce100_enable_display_power_gating(struct dc *dc, uint8_t controller_id, bool dce100_enable_display_power_gating(struct dc *dc, uint8_t controller_id,
struct dc_bios *dcb, struct dc_bios *dcb,
enum pipe_gating_control power_gating); enum pipe_gating_control power_gating);

View File

@ -77,6 +77,6 @@ void dce80_hw_sequencer_construct(struct dc *dc)
dc->hwss.enable_display_power_gating = dce100_enable_display_power_gating; dc->hwss.enable_display_power_gating = dce100_enable_display_power_gating;
dc->hwss.pipe_control_lock = dce_pipe_control_lock; dc->hwss.pipe_control_lock = dce_pipe_control_lock;
dc->hwss.prepare_bandwidth = dce100_prepare_bandwidth; dc->hwss.prepare_bandwidth = dce100_prepare_bandwidth;
dc->hwss.optimize_bandwidth = dce100_prepare_bandwidth; dc->hwss.optimize_bandwidth = dce100_optimize_bandwidth;
} }

View File

@ -792,9 +792,22 @@ bool dce80_validate_bandwidth(
struct dc *dc, struct dc *dc,
struct dc_state *context) struct dc_state *context)
{ {
int i;
bool at_least_one_pipe = false;
for (i = 0; i < dc->res_pool->pipe_count; i++) {
if (context->res_ctx.pipe_ctx[i].stream)
at_least_one_pipe = true;
}
if (at_least_one_pipe) {
/* TODO implement when needed but for now hardcode max value*/ /* TODO implement when needed but for now hardcode max value*/
context->bw.dce.dispclk_khz = 681000; context->bw.dce.dispclk_khz = 681000;
context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ; context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ;
} else {
context->bw.dce.dispclk_khz = 0;
context->bw.dce.yclk_khz = 0;
}
return true; return true;
} }

View File

@ -2693,8 +2693,8 @@ static void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
.mirror = pipe_ctx->plane_state->horizontal_mirror .mirror = pipe_ctx->plane_state->horizontal_mirror
}; };
pos_cpy.x -= pipe_ctx->plane_state->dst_rect.x; pos_cpy.x_hotspot += pipe_ctx->plane_state->dst_rect.x;
pos_cpy.y -= pipe_ctx->plane_state->dst_rect.y; pos_cpy.y_hotspot += pipe_ctx->plane_state->dst_rect.y;
if (pipe_ctx->plane_state->address.type if (pipe_ctx->plane_state->address.type
== PLN_ADDR_TYPE_VIDEO_PROGRESSIVE) == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)

View File

@ -145,6 +145,10 @@ static int bochs_pci_probe(struct pci_dev *pdev,
if (IS_ERR(dev)) if (IS_ERR(dev))
return PTR_ERR(dev); return PTR_ERR(dev);
ret = pci_enable_device(pdev);
if (ret)
goto err_free_dev;
dev->pdev = pdev; dev->pdev = pdev;
pci_set_drvdata(pdev, dev); pci_set_drvdata(pdev, dev);

View File

@ -1608,6 +1608,15 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
old_plane_state->crtc != new_plane_state->crtc) old_plane_state->crtc != new_plane_state->crtc)
return -EINVAL; return -EINVAL;
/*
* FIXME: Since prepare_fb and cleanup_fb are always called on
* the new_plane_state for async updates we need to block framebuffer
* changes. This prevents use of a fb that's been cleaned up and
* double cleanups from occurring.
*/
if (old_plane_state->fb != new_plane_state->fb)
return -EINVAL;
funcs = plane->helper_private; funcs = plane->helper_private;
if (!funcs->atomic_async_update) if (!funcs->atomic_async_update)
return -EINVAL; return -EINVAL;

View File

@ -338,8 +338,8 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
bool *enabled, int width, int height) bool *enabled, int width, int height)
{ {
struct drm_i915_private *dev_priv = to_i915(fb_helper->dev); struct drm_i915_private *dev_priv = to_i915(fb_helper->dev);
unsigned long conn_configured, conn_seq, mask;
unsigned int count = min(fb_helper->connector_count, BITS_PER_LONG); unsigned int count = min(fb_helper->connector_count, BITS_PER_LONG);
unsigned long conn_configured, conn_seq;
int i, j; int i, j;
bool *save_enabled; bool *save_enabled;
bool fallback = true, ret = true; bool fallback = true, ret = true;
@ -357,10 +357,9 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
drm_modeset_backoff(&ctx); drm_modeset_backoff(&ctx);
memcpy(save_enabled, enabled, count); memcpy(save_enabled, enabled, count);
mask = GENMASK(count - 1, 0); conn_seq = GENMASK(count - 1, 0);
conn_configured = 0; conn_configured = 0;
retry: retry:
conn_seq = conn_configured;
for (i = 0; i < count; i++) { for (i = 0; i < count; i++) {
struct drm_fb_helper_connector *fb_conn; struct drm_fb_helper_connector *fb_conn;
struct drm_connector *connector; struct drm_connector *connector;
@ -373,7 +372,8 @@ retry:
if (conn_configured & BIT(i)) if (conn_configured & BIT(i))
continue; continue;
if (conn_seq == 0 && !connector->has_tile) /* First pass, only consider tiled connectors */
if (conn_seq == GENMASK(count - 1, 0) && !connector->has_tile)
continue; continue;
if (connector->status == connector_status_connected) if (connector->status == connector_status_connected)
@ -477,8 +477,10 @@ retry:
conn_configured |= BIT(i); conn_configured |= BIT(i);
} }
if ((conn_configured & mask) != mask && conn_configured != conn_seq) if (conn_configured != conn_seq) { /* repeat until no more are found */
conn_seq = conn_configured;
goto retry; goto retry;
}
/* /*
* If the BIOS didn't enable everything it could, fall back to have the * If the BIOS didn't enable everything it could, fall back to have the

View File

@ -172,6 +172,7 @@ int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags)
} }
if (radeon_is_px(dev)) { if (radeon_is_px(dev)) {
dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP);
pm_runtime_use_autosuspend(dev->dev); pm_runtime_use_autosuspend(dev->dev);
pm_runtime_set_autosuspend_delay(dev->dev, 5000); pm_runtime_set_autosuspend_delay(dev->dev, 5000);
pm_runtime_set_active(dev->dev); pm_runtime_set_active(dev->dev);

View File

@ -783,6 +783,7 @@ void c4iw_init_dev_ucontext(struct c4iw_rdev *rdev,
static int c4iw_rdev_open(struct c4iw_rdev *rdev) static int c4iw_rdev_open(struct c4iw_rdev *rdev)
{ {
int err; int err;
unsigned int factor;
c4iw_init_dev_ucontext(rdev, &rdev->uctx); c4iw_init_dev_ucontext(rdev, &rdev->uctx);
@ -806,8 +807,18 @@ static int c4iw_rdev_open(struct c4iw_rdev *rdev)
return -EINVAL; return -EINVAL;
} }
rdev->qpmask = rdev->lldi.udb_density - 1; /* This implementation requires a sge_host_page_size <= PAGE_SIZE. */
rdev->cqmask = rdev->lldi.ucq_density - 1; if (rdev->lldi.sge_host_page_size > PAGE_SIZE) {
pr_err("%s: unsupported sge host page size %u\n",
pci_name(rdev->lldi.pdev),
rdev->lldi.sge_host_page_size);
return -EINVAL;
}
factor = PAGE_SIZE / rdev->lldi.sge_host_page_size;
rdev->qpmask = (rdev->lldi.udb_density * factor) - 1;
rdev->cqmask = (rdev->lldi.ucq_density * factor) - 1;
pr_debug("dev %s stag start 0x%0x size 0x%0x num stags %d pbl start 0x%0x size 0x%0x rq start 0x%0x size 0x%0x qp qid start %u size %u cq qid start %u size %u srq size %u\n", pr_debug("dev %s stag start 0x%0x size 0x%0x num stags %d pbl start 0x%0x size 0x%0x rq start 0x%0x size 0x%0x qp qid start %u size %u cq qid start %u size %u srq size %u\n",
pci_name(rdev->lldi.pdev), rdev->lldi.vr->stag.start, pci_name(rdev->lldi.pdev), rdev->lldi.vr->stag.start,
rdev->lldi.vr->stag.size, c4iw_num_stags(rdev), rdev->lldi.vr->stag.size, c4iw_num_stags(rdev),

View File

@ -3032,7 +3032,6 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
{ {
struct srp_target_port *target = host_to_target(scmnd->device->host); struct srp_target_port *target = host_to_target(scmnd->device->host);
struct srp_rdma_ch *ch; struct srp_rdma_ch *ch;
int i, j;
u8 status; u8 status;
shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n"); shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n");
@ -3044,15 +3043,6 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
if (status) if (status)
return FAILED; return FAILED;
for (i = 0; i < target->ch_count; i++) {
ch = &target->ch[i];
for (j = 0; j < target->req_ring_size; ++j) {
struct srp_request *req = &ch->req_ring[j];
srp_finish_req(ch, req, scmnd->device, DID_RESET << 16);
}
}
return SUCCESS; return SUCCESS;
} }

View File

@ -144,7 +144,7 @@ dmar_alloc_pci_notify_info(struct pci_dev *dev, unsigned long event)
for (tmp = dev; tmp; tmp = tmp->bus->self) for (tmp = dev; tmp; tmp = tmp->bus->self)
level++; level++;
size = sizeof(*info) + level * sizeof(struct acpi_dmar_pci_path); size = sizeof(*info) + level * sizeof(info->path[0]);
if (size <= sizeof(dmar_pci_notify_info_buf)) { if (size <= sizeof(dmar_pci_notify_info_buf)) {
info = (struct dmar_pci_notify_info *)dmar_pci_notify_info_buf; info = (struct dmar_pci_notify_info *)dmar_pci_notify_info_buf;
} else { } else {

View File

@ -1396,9 +1396,9 @@ static void flexrm_shutdown(struct mbox_chan *chan)
/* Clear ring flush state */ /* Clear ring flush state */
timeout = 1000; /* timeout of 1s */ timeout = 1000; /* timeout of 1s */
writel_relaxed(0x0, ring + RING_CONTROL); writel_relaxed(0x0, ring->regs + RING_CONTROL);
do { do {
if (!(readl_relaxed(ring + RING_FLUSH_DONE) & if (!(readl_relaxed(ring->regs + RING_FLUSH_DONE) &
FLUSH_DONE_MASK)) FLUSH_DONE_MASK))
break; break;
mdelay(1); mdelay(1);

View File

@ -310,6 +310,7 @@ int mbox_flush(struct mbox_chan *chan, unsigned long timeout)
return ret; return ret;
} }
EXPORT_SYMBOL_GPL(mbox_flush);
/** /**
* mbox_request_channel - Request a mailbox channel. * mbox_request_channel - Request a mailbox channel.

View File

@ -2380,12 +2380,6 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
snprintf(md->disk->disk_name, sizeof(md->disk->disk_name), snprintf(md->disk->disk_name, sizeof(md->disk->disk_name),
"mmcblk%u%s", card->host->index, subname ? subname : ""); "mmcblk%u%s", card->host->index, subname ? subname : "");
if (mmc_card_mmc(card))
blk_queue_logical_block_size(md->queue.queue,
card->ext_csd.data_sector_size);
else
blk_queue_logical_block_size(md->queue.queue, 512);
set_capacity(md->disk, size); set_capacity(md->disk, size);
if (mmc_host_cmd23(card->host)) { if (mmc_host_cmd23(card->host)) {

View File

@ -95,7 +95,7 @@ static void mmc_should_fail_request(struct mmc_host *host,
if (!data) if (!data)
return; return;
if (cmd->error || data->error || if ((cmd && cmd->error) || data->error ||
!should_fail(&host->fail_mmc_request, data->blksz * data->blocks)) !should_fail(&host->fail_mmc_request, data->blksz * data->blocks))
return; return;

View File

@ -355,6 +355,7 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
{ {
struct mmc_host *host = card->host; struct mmc_host *host = card->host;
u64 limit = BLK_BOUNCE_HIGH; u64 limit = BLK_BOUNCE_HIGH;
unsigned block_size = 512;
if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask) if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT; limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
@ -368,7 +369,13 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
blk_queue_max_hw_sectors(mq->queue, blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512)); min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs); blk_queue_max_segments(mq->queue, host->max_segs);
blk_queue_max_segment_size(mq->queue, host->max_seg_size);
if (mmc_card_mmc(card))
block_size = card->ext_csd.data_sector_size;
blk_queue_logical_block_size(mq->queue, block_size);
blk_queue_max_segment_size(mq->queue,
round_down(host->max_seg_size, block_size));
INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler); INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler);
INIT_WORK(&mq->complete_work, mmc_blk_mq_complete_work); INIT_WORK(&mq->complete_work, mmc_blk_mq_complete_work);
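The round_down() keeps the per-segment limit a multiple of the logical block size, so a request is never forced to split inside a sector. For illustration only, with a hypothetical host->max_seg_size of 65535 bytes and 4096-byte data sectors, the queue limit becomes round_down(65535, 4096) = 61440 instead of the unaligned 65535.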

View File

@ -201,7 +201,7 @@ static int cqhci_host_alloc_tdl(struct cqhci_host *cq_host)
cq_host->desc_size = cq_host->slot_sz * cq_host->num_slots; cq_host->desc_size = cq_host->slot_sz * cq_host->num_slots;
cq_host->data_size = cq_host->trans_desc_len * cq_host->mmc->max_segs * cq_host->data_size = cq_host->trans_desc_len * cq_host->mmc->max_segs *
(cq_host->num_slots - 1); cq_host->mmc->cqe_qdepth;
pr_debug("%s: cqhci: desc_size: %zu data_sz: %zu slot-sz: %d\n", pr_debug("%s: cqhci: desc_size: %zu data_sz: %zu slot-sz: %d\n",
mmc_hostname(cq_host->mmc), cq_host->desc_size, cq_host->data_size, mmc_hostname(cq_host->mmc), cq_host->desc_size, cq_host->data_size,
@ -217,12 +217,21 @@ static int cqhci_host_alloc_tdl(struct cqhci_host *cq_host)
cq_host->desc_size, cq_host->desc_size,
&cq_host->desc_dma_base, &cq_host->desc_dma_base,
GFP_KERNEL); GFP_KERNEL);
if (!cq_host->desc_base)
return -ENOMEM;
cq_host->trans_desc_base = dmam_alloc_coherent(mmc_dev(cq_host->mmc), cq_host->trans_desc_base = dmam_alloc_coherent(mmc_dev(cq_host->mmc),
cq_host->data_size, cq_host->data_size,
&cq_host->trans_desc_dma_base, &cq_host->trans_desc_dma_base,
GFP_KERNEL); GFP_KERNEL);
if (!cq_host->desc_base || !cq_host->trans_desc_base) if (!cq_host->trans_desc_base) {
dmam_free_coherent(mmc_dev(cq_host->mmc), cq_host->desc_size,
cq_host->desc_base,
cq_host->desc_dma_base);
cq_host->desc_base = NULL;
cq_host->desc_dma_base = 0;
return -ENOMEM; return -ENOMEM;
}
pr_debug("%s: cqhci: desc-base: 0x%p trans-base: 0x%p\n desc_dma 0x%llx trans_dma: 0x%llx\n", pr_debug("%s: cqhci: desc-base: 0x%p trans-base: 0x%p\n desc_dma 0x%llx trans_dma: 0x%llx\n",
mmc_hostname(cq_host->mmc), cq_host->desc_base, cq_host->trans_desc_base, mmc_hostname(cq_host->mmc), cq_host->desc_base, cq_host->trans_desc_base,

View File

@ -1450,6 +1450,7 @@ static int mmc_spi_probe(struct spi_device *spi)
mmc->caps &= ~MMC_CAP_NEEDS_POLL; mmc->caps &= ~MMC_CAP_NEEDS_POLL;
mmc_gpiod_request_cd_irq(mmc); mmc_gpiod_request_cd_irq(mmc);
} }
mmc_detect_change(mmc, 0);
/* Index 1 is write protect/read only */ /* Index 1 is write protect/read only */
status = mmc_gpiod_request_ro(mmc, NULL, 1, false, 0, NULL); status = mmc_gpiod_request_ro(mmc, NULL, 1, false, 0, NULL);

View File

@ -65,6 +65,7 @@ static const struct renesas_sdhi_of_data of_rcar_gen2_compatible = {
.scc_offset = 0x0300, .scc_offset = 0x0300,
.taps = rcar_gen2_scc_taps, .taps = rcar_gen2_scc_taps,
.taps_num = ARRAY_SIZE(rcar_gen2_scc_taps), .taps_num = ARRAY_SIZE(rcar_gen2_scc_taps),
.max_blk_count = 0xffffffff,
}; };
/* Definitions for sampling clocks */ /* Definitions for sampling clocks */

Some files were not shown because too many files have changed in this diff.