commit 616cabd6df

Merge d895ec7938 ("Merge tag 'block-6.0-2022-09-02' of git://git.kernel.dk/linux-block")
into android-mainline

Steps on the way to 6.0-rc4

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I520176d120a315099458f3cc41cf190afa201766
@@ -23,3 +23,4 @@ Block
    stat
    switching-sched
    writeback_cache_control
+   ublk
Documentation/block/ublk.rst (new file, 253 lines)
@@ -0,0 +1,253 @@
.. SPDX-License-Identifier: GPL-2.0

===========================================
Userspace block device driver (ublk driver)
===========================================

Overview
========

ublk is a generic framework for implementing block device logic from userspace.
The motivation behind it is that moving virtual block drivers into userspace,
such as loop, nbd and similar, can be very helpful. It can help to implement
new virtual block devices such as ublk-qcow2 (there have been several attempts
at implementing a qcow2 driver in the kernel).

Userspace block devices are attractive because:

- They can be written in many programming languages.
- They can use libraries that are not available in the kernel.
- They can be debugged with tools familiar to application developers.
- Crashes do not kernel panic the machine.
- Bugs are likely to have a lower security impact than bugs in kernel
  code.
- They can be installed and updated independently of the kernel.
- They can be used to simulate block devices easily with user-specified
  parameters/settings for test/debug purposes.

The ublk block device (``/dev/ublkb*``) is added by the ublk driver. Any IO
request on the device will be forwarded to the ublk userspace program. For
convenience, in this document, ``ublk server`` refers to a generic ublk
userspace program. ``ublksrv`` [#userspace]_ is one such implementation. It
provides the ``libublksrv`` [#userspace_lib]_ library for developing specific
user block devices conveniently, while generic block device types, such as
loop and null, are also included. Richard W.M. Jones wrote the userspace nbd
device ``nbdublk`` [#userspace_nbdublk]_ based on ``libublksrv``
[#userspace_lib]_.

After the IO is handled by userspace, the result is committed back to the
driver, thus completing the request cycle. This way, any specific IO handling
logic is done entirely in userspace, such as loop's IO handling, NBD's IO
communication, or qcow2's IO mapping.

``/dev/ublkb*`` is driven by a blk-mq request-based driver. Each request is
assigned a queue-wide unique tag. The ublk server assigns a unique tag to each
IO too, which is 1:1 mapped with the IO of ``/dev/ublkb*``.

Both forwarding the IO request and committing the IO handling result are done
via ``io_uring`` passthrough commands; that is why ublk is also an io_uring
based block driver. It has been observed that io_uring passthrough commands
can give better IOPS than block IO, which is why ublk is one high-performance
implementation of a userspace block device: not only is the IO request
communication done by io_uring, but the preferred IO handling approach in the
ublk server is io_uring based too.

ublk provides a control interface to set/get ublk block device parameters.
The interface is extendable and kabi compatible: basically any ublk request
queue parameter or ublk generic feature parameter can be set/get via the
interface. Thus, ublk is a generic userspace block device framework.
For example, it is easy to set up a ublk device with specified block
parameters from userspace.

Using ublk
==========

ublk requires a userspace ublk server to handle the real block device logic.

Below is an example of using ``ublksrv`` to provide a ublk-based loop device.

- add a device::

     ublk add -t loop -f ublk-loop.img

- format with xfs, then use it::

     mkfs.xfs /dev/ublkb0
     mount /dev/ublkb0 /mnt
     # do anything. all IOs are handled by io_uring
     ...
     umount /mnt

- list the devices with their info::

     ublk list

- delete the device::

     ublk del -a
     ublk del -n $ublk_dev_id

See usage details in the README of ``ublksrv`` [#userspace_readme]_.

Design
======

Control plane
-------------

The ublk driver provides a global misc device node (``/dev/ublk-control``) for
managing and controlling ublk devices with the help of several control
commands:

- ``UBLK_CMD_ADD_DEV``

  Add a ublk char device (``/dev/ublkc*``) which the ublk server talks to
  WRT IO command communication. Basic device info is sent together with this
  command. It sets the UAPI structure ``ublksrv_ctrl_dev_info``,
  such as ``nr_hw_queues``, ``queue_depth``, and the max IO request buffer
  size; the info is negotiated with the driver and sent back to the server.
  When this command is completed, the basic device info is immutable.

- ``UBLK_CMD_SET_PARAMS`` / ``UBLK_CMD_GET_PARAMS``

  Set or get parameters of the device, which can be either generic feature
  related or request queue limit related, but can't be IO logic specific,
  because the driver does not handle any IO logic. This command has to be
  sent before sending ``UBLK_CMD_START_DEV``.

- ``UBLK_CMD_START_DEV``

  After the server prepares userspace resources (such as creating a per-queue
  pthread & io_uring for handling ublk IO), this command is sent to the
  driver for allocating & exposing ``/dev/ublkb*``. Parameters set via
  ``UBLK_CMD_SET_PARAMS`` are applied for creating the device.

- ``UBLK_CMD_STOP_DEV``

  Halt IO on ``/dev/ublkb*`` and remove the device. When this command returns,
  the ublk server will release resources (such as destroying the per-queue
  pthread & io_uring).

- ``UBLK_CMD_DEL_DEV``

  Remove ``/dev/ublkc*``. When this command returns, the allocated ublk device
  number can be reused.

- ``UBLK_CMD_GET_QUEUE_AFFINITY``

  When ``/dev/ublkc*`` is added, the driver creates the block layer tagset, so
  that each queue's affinity info is available. The server sends
  ``UBLK_CMD_GET_QUEUE_AFFINITY`` to retrieve queue affinity info. With it,
  the server can set up the per-queue context efficiently, such as binding
  affine CPUs to the IO pthread and trying to allocate buffers in the IO
  thread context.

- ``UBLK_CMD_GET_DEV_INFO``

  For retrieving device info via ``ublksrv_ctrl_dev_info``. It is the server's
  responsibility to save IO-target-specific info in userspace.

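For illustration only, a minimal sketch of sending one such control command
(``UBLK_CMD_GET_DEV_INFO``) over the io_uring passthrough interface follows.
It assumes liburing with 128-byte SQE support and the ``ublksrv_ctrl_cmd`` /
``ublksrv_ctrl_dev_info`` layouts from ``<linux/ublk_cmd.h>``; the helper name
is invented for this document and error handling is trimmed, so treat it as a
sketch rather than a reference implementation::

    /* hypothetical helper, not part of ublksrv or the kernel */
    #include <fcntl.h>
    #include <string.h>
    #include <liburing.h>
    #include <linux/ublk_cmd.h>

    static int ublk_ctrl_get_dev_info(int dev_id,
                                      struct ublksrv_ctrl_dev_info *info)
    {
            struct io_uring ring;
            struct io_uring_sqe *sqe;
            struct io_uring_cqe *cqe;
            struct ublksrv_ctrl_cmd *cmd;
            int fd, ret;

            fd = open("/dev/ublk-control", O_RDWR);
            if (fd < 0)
                    return -1;

            /* control commands carry their payload inline: 128-byte SQEs */
            if (io_uring_queue_init(4, &ring, IORING_SETUP_SQE128))
                    return -1;

            sqe = io_uring_get_sqe(&ring);
            io_uring_prep_rw(IORING_OP_URING_CMD, sqe, fd, NULL, 0, 0);
            sqe->cmd_op = UBLK_CMD_GET_DEV_INFO;

            cmd = (struct ublksrv_ctrl_cmd *)sqe->cmd;
            memset(cmd, 0, sizeof(*cmd));
            cmd->dev_id = dev_id;
            cmd->addr = (unsigned long)info;  /* driver copies dev info here */
            cmd->len = sizeof(*info);

            io_uring_submit(&ring);
            io_uring_wait_cqe(&ring, &cqe);
            ret = cqe->res;                   /* < 0 on failure */
            io_uring_cqe_seen(&ring, cqe);
            io_uring_queue_exit(&ring);
            return ret;
    }
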
Data plane
----------

The ublk server needs to create a per-queue IO pthread & io_uring for handling
IO commands via io_uring passthrough. The per-queue IO pthread
focuses on IO handling and shouldn't handle any control & management
tasks.

Each IO is assigned a unique tag, which is 1:1 mapped with an IO
request of ``/dev/ublkb*``.

The UAPI structure ``ublksrv_io_desc`` is defined for describing each IO from
the driver. A fixed mmapped area (array) on ``/dev/ublkc*`` is provided for
exporting IO info to the server, such as IO offset, length, OP/flags and
buffer address. Each ``ublksrv_io_desc`` instance can be indexed via queue id
and IO tag directly.

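As an illustration, mapping one queue's descriptor array and indexing it by
tag might look as follows. The offset math mirrors the
``UBLKSRV_CMD_BUF_OFFSET`` and ``UBLK_MAX_QUEUE_DEPTH`` constants in
``<linux/ublk_cmd.h>`` as this document understands them, so treat it as an
assumption, not a reference implementation::

    #include <stddef.h>
    #include <sys/mman.h>
    #include <linux/ublk_cmd.h>

    /* Map queue q_id's ublksrv_io_desc array from the ublk char device.
     * Each queue owns a fixed slot sized for the maximum queue depth. */
    static struct ublksrv_io_desc *map_io_descs(int char_fd, int q_id,
                                                unsigned int queue_depth)
    {
            off_t off = UBLKSRV_CMD_BUF_OFFSET + (off_t)q_id *
                    UBLK_MAX_QUEUE_DEPTH * sizeof(struct ublksrv_io_desc);
            size_t len = queue_depth * sizeof(struct ublksrv_io_desc);
            void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, char_fd, off);

            return p == MAP_FAILED ? NULL : p;
    }

    /* once an IO with tag t is fetched, its descriptor is simply descs[t] */
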
The following IO commands are communicated via io_uring passthrough command,
and each command is only for forwarding the IO and committing the result
with the specified IO tag in the command data:

- ``UBLK_IO_FETCH_REQ``

  Sent from the server IO pthread for fetching future incoming IO requests
  destined to ``/dev/ublkb*``. This command is sent only once from the server
  IO pthread for the ublk driver to set up the IO forwarding environment.

- ``UBLK_IO_COMMIT_AND_FETCH_REQ``

  When an IO request is destined to ``/dev/ublkb*``, the driver stores
  the IO's ``ublksrv_io_desc`` to the specified mapped area; then the
  previously received IO command with this IO tag (either
  ``UBLK_IO_FETCH_REQ`` or ``UBLK_IO_COMMIT_AND_FETCH_REQ``) is completed, so
  the server gets the IO notification via io_uring.

  After the server handles the IO, its result is committed back to the
  driver by sending ``UBLK_IO_COMMIT_AND_FETCH_REQ`` back. Once ublkdrv
  receives this command, it parses the result and completes the request to
  ``/dev/ublkb*``. In the meantime it sets up the environment for fetching
  future requests with the same IO tag. That is,
  ``UBLK_IO_COMMIT_AND_FETCH_REQ`` is reused for both fetching a request and
  committing back the IO result; a sketch of the resulting per-queue server
  loop follows below.

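Putting the two commands together, the per-queue server loop is essentially
"arm every tag once, then commit-and-refetch forever". The outline below is a
sketch only; ``queue_ublk_io_cmd()`` and ``handle_io()`` are hypothetical
helpers standing in for the SQE preparation (as in the control-plane example)
and for the target's IO logic::

    #include <liburing.h>
    #include <linux/ublk_cmd.h>

    /* hypothetical stand-ins, declared so the sketch is self-contained */
    extern void queue_ublk_io_cmd(struct io_uring *ring, unsigned int cmd_op,
                                  unsigned int tag, void *buf, int result);
    extern int handle_io(const struct ublksrv_io_desc *iod, void *buf);

    static void queue_loop(struct io_uring *ring,
                           const struct ublksrv_io_desc *descs,
                           void **bufs, unsigned int queue_depth)
    {
            struct io_uring_cqe *cqe;
            unsigned int tag;
            int res;

            /* arm every tag exactly once */
            for (tag = 0; tag < queue_depth; tag++)
                    queue_ublk_io_cmd(ring, UBLK_IO_FETCH_REQ, tag,
                                      bufs[tag], 0);
            io_uring_submit(ring);

            for (;;) {
                    io_uring_wait_cqe(ring, &cqe);  /* an IO arrived */
                    tag = (unsigned int)cqe->user_data;
                    io_uring_cqe_seen(ring, cqe);

                    /* target-specific logic: loop/null/qcow2/... */
                    res = handle_io(&descs[tag], bufs[tag]);

                    /* commit result and re-arm the same tag in one command */
                    queue_ublk_io_cmd(ring, UBLK_IO_COMMIT_AND_FETCH_REQ,
                                      tag, bufs[tag], res);
                    io_uring_submit(ring);
            }
    }
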
- ``UBLK_IO_NEED_GET_DATA``

  With ``UBLK_F_NEED_GET_DATA`` enabled, the WRITE request will first be
  issued to the ublk server without a data copy. Then, the IO backend of the
  ublk server receives the request, and it can allocate a data buffer and
  embed its addr inside this new io command. After the kernel driver gets the
  command, the data copy is done from the request pages to this backend's
  buffer. Finally, the backend receives the request again with the data to be
  written, and it can truly handle the request.

  ``UBLK_IO_NEED_GET_DATA`` adds one additional round-trip and one
  io_uring_enter() syscall. Any user who thinks that it may lower performance
  should not enable UBLK_F_NEED_GET_DATA. The ublk server pre-allocates an IO
  buffer for each IO by default. Any new project should try to use this
  buffer to communicate with the ublk driver. However, existing projects may
  break or be unable to consume the new buffer interface; that's why this
  command is added for backwards compatibility so that existing projects
  can still consume existing buffers.

- data copy between ublk server IO buffer and ublk block IO request

  The driver needs to copy the block IO request pages into the server buffer
  (pages) first for WRITE before notifying the server of the coming IO, so
  that the server can handle the WRITE request.

  When the server handles a READ request and sends
  ``UBLK_IO_COMMIT_AND_FETCH_REQ`` to the driver, ublkdrv needs to copy
  the read server buffer (pages) to the IO request pages.

Future development
==================

Container-aware ublk device
---------------------------

The ublk driver doesn't handle any IO logic. Its function is well defined
for now, and only very limited userspace interfaces are needed, which are
also well defined. It is possible to make ublk devices container-aware block
devices in the future, as Stefan Hajnoczi suggested [#stefan]_, by removing
the ADMIN privilege requirement.

Zero copy
---------

Zero copy is a generic requirement for nbd, fuse or similar drivers. A
problem [#xiaoguang]_ Xiaoguang mentioned is that pages mapped to userspace
can't be remapped any more in the kernel with existing mm interfaces. This can
occur when sending direct IO to ``/dev/ublkb*``. Also, he reported that
big requests (IO size >= 256 KB) may benefit a lot from zero copy.


References
==========

.. [#userspace] https://github.com/ming1/ubdsrv

.. [#userspace_lib] https://github.com/ming1/ubdsrv/tree/master/lib

.. [#userspace_nbdublk] https://gitlab.com/rwmjones/libnbd/-/tree/nbdublk

.. [#userspace_readme] https://github.com/ming1/ubdsrv/blob/master/README

.. [#stefan] https://lore.kernel.org/linux-block/YoOr6jBfgVm8GvWg@stefanha-x1.localdomain/

.. [#xiaoguang] https://lore.kernel.org/linux-block/YoOr6jBfgVm8GvWg@stefanha-x1.localdomain/
@@ -24,8 +24,10 @@ properties:

  interrupts:
    minItems: 1
    maxItems: 2
    description:
      Should be configured with type IRQ_TYPE_EDGE_RISING.
      If two interrupts are provided, expected order is INT1 and INT2.

required:
  - compatible
@@ -24,6 +24,7 @@ properties:
           - mediatek,mt2712-mtu3
           - mediatek,mt8173-mtu3
           - mediatek,mt8183-mtu3
+          - mediatek,mt8188-mtu3
           - mediatek,mt8192-mtu3
           - mediatek,mt8195-mtu3
           - const: mediatek,mtu3
@@ -33,6 +33,7 @@ properties:
           - qcom,sm6115-dwc3
           - qcom,sm6125-dwc3
           - qcom,sm6350-dwc3
+          - qcom,sm6375-dwc3
           - qcom,sm8150-dwc3
           - qcom,sm8250-dwc3
           - qcom,sm8350-dwc3
@@ -108,12 +109,17 @@ properties:
      HS/FS/LS modes are supported.
    type: boolean

  wakeup-source: true

  # Required child node:

patternProperties:
  "^usb@[0-9a-f]+$":
    $ref: snps,dwc3.yaml#

    properties:
      wakeup-source: false

required:
  - compatible
  - reg
@@ -67,7 +67,7 @@ The ``netdevsim`` driver supports rate objects management, which includes:
 - setting tx_share and tx_max rate values for any rate object type;
 - setting parent node for any rate object type.
 
-Rate nodes and it's parameters are exposed in ``netdevsim`` debugfs in RO mode.
+Rate nodes and their parameters are exposed in ``netdevsim`` debugfs in RO mode.
 For example created rate node with name ``some_group``:
 
 .. code:: shell
@@ -8,7 +8,7 @@ Transmit path guidelines:
 
 1) The ndo_start_xmit method must not return NETDEV_TX_BUSY under
    any normal circumstances. It is considered a hard error unless
-   there is no way your device can tell ahead of time when it's
+   there is no way your device can tell ahead of time when its
    transmit function will become busy.
 
    Instead it must maintain the queue properly. For example,
@@ -1035,7 +1035,10 @@ tcp_limit_output_bytes - INTEGER
 tcp_challenge_ack_limit - INTEGER
 	Limits number of Challenge ACK sent per second, as recommended
 	in RFC 5961 (Improving TCP's Robustness to Blind In-Window Attacks)
-	Default: 1000
+	Note that this per netns rate limit can allow some side channel
+	attacks and probably should not be enabled.
+	TCP stack implements per TCP socket limits anyway.
+	Default: INT_MAX (unlimited)
 
 UDP variables
 =============
@@ -11,7 +11,7 @@ Initial Release:
 ================
 This is conceptually very similar to the macvlan driver with one major
 exception of using L3 for mux-ing /demux-ing among slaves. This property makes
-the master device share the L2 with it's slave devices. I have developed this
+the master device share the L2 with its slave devices. I have developed this
 driver in conjunction with network namespaces and not sure if there is use case
 outside of it.
@@ -530,7 +530,7 @@ its tunnel close actions. For L2TPIP sockets, the socket's close
 handler initiates the same tunnel close actions. All sessions are
 first closed. Each session drops its tunnel ref. When the tunnel ref
 reaches zero, the tunnel puts its socket ref. When the socket is
-eventually destroyed, it's sk_destruct finally frees the L2TP tunnel
+eventually destroyed, its sk_destruct finally frees the L2TP tunnel
 context.
 
 Sessions
@@ -159,7 +159,7 @@ tools such as iproute2.
 
 The switchdev driver can know a particular port's position in the topology by
 monitoring NETDEV_CHANGEUPPER notifications. For example, a port moved into a
-bond will see it's upper master change. If that bond is moved into a bridge,
+bond will see its upper master change. If that bond is moved into a bridge,
 the bond's upper master will change. And so on. The driver will track such
 movements to know what position a port is in in the overall topology by
 registering for netdevice events and acting on NETDEV_CHANGEUPPER.
@@ -20771,6 +20771,7 @@ UBLK USERSPACE BLOCK DRIVER
 M:	Ming Lei <ming.lei@redhat.com>
 L:	linux-block@vger.kernel.org
 S:	Maintained
+F:	Documentation/block/ublk.rst
 F:	drivers/block/ublk_drv.c
 F:	include/uapi/linux/ublk_cmd.h
@@ -64,28 +64,28 @@
 #define EARLY_KASLR	(0)
 #endif
 
-#define EARLY_ENTRIES(vstart, vend, shift) \
-	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)
+#define EARLY_ENTRIES(vstart, vend, shift, add) \
+	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + add)
 
-#define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT))
+#define EARLY_PGDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT, add))
 
 #if SWAPPER_PGTABLE_LEVELS > 3
-#define EARLY_PUDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PUD_SHIFT))
+#define EARLY_PUDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PUD_SHIFT, add))
 #else
-#define EARLY_PUDS(vstart, vend) (0)
+#define EARLY_PUDS(vstart, vend, add) (0)
 #endif
 
 #if SWAPPER_PGTABLE_LEVELS > 2
-#define EARLY_PMDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, SWAPPER_TABLE_SHIFT))
+#define EARLY_PMDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, SWAPPER_TABLE_SHIFT, add))
 #else
-#define EARLY_PMDS(vstart, vend) (0)
+#define EARLY_PMDS(vstart, vend, add) (0)
 #endif
 
-#define EARLY_PAGES(vstart, vend) ( 1 			/* PGDIR page */				\
-			+ EARLY_PGDS((vstart), (vend)) 	/* each PGDIR needs a next level page table */	\
-			+ EARLY_PUDS((vstart), (vend))	/* each PUD needs a next level page table */	\
-			+ EARLY_PMDS((vstart), (vend)))	/* each PMD needs a next level page table */
-#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end))
+#define EARLY_PAGES(vstart, vend, add) ( 1 			/* PGDIR page */				\
+			+ EARLY_PGDS((vstart), (vend), add) 	/* each PGDIR needs a next level page table */	\
+			+ EARLY_PUDS((vstart), (vend), add)	/* each PUD needs a next level page table */	\
+			+ EARLY_PMDS((vstart), (vend), add))	/* each PMD needs a next level page table */
+#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR))
 
 /* the initial ID map may need two extra pages if it needs to be extended */
 #if VA_BITS < 48
@@ -93,7 +93,7 @@
 #else
 #define INIT_IDMAP_DIR_SIZE	(INIT_IDMAP_DIR_PAGES * PAGE_SIZE)
 #endif
-#define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE)
+#define INIT_IDMAP_DIR_PAGES	EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
 
 /* Initial memory map size */
 #if ARM64_KERNEL_USES_PMD_MAPS
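A quick illustration of the EARLY_ENTRIES() arithmetic above, with made-up
numbers (this block is an editorial example, not part of the kernel header):

    /* worst-case number of 'shift'-granule entries covering [vstart, vend),
     * plus 'add' spare entries; mirrors EARLY_ENTRIES() for illustration */
    #define ENTRIES(vstart, vend, shift, add) \
            ((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + (add))

    /* with 4 KiB pages (shift == 12), [0x1000, 0x5000) touches pages 1..4 */
    _Static_assert(ENTRIES(0x1000, 0x5000, 12, 0) == 4, "four pages");
    /* 'add' models extra entries, e.g. for KASLR or the extended ID map */
    _Static_assert(ENTRIES(0x1000, 0x5000, 12, 1) == 5, "one spare entry");
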
@@ -371,7 +371,9 @@ SYM_FUNC_END(create_idmap)
 SYM_FUNC_START_LOCAL(create_kernel_mapping)
 	adrp	x0, init_pg_dir
 	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
+#ifdef CONFIG_RELOCATABLE
 	add	x5, x5, x23			// add KASLR displacement
+#endif
 	adrp	x6, _end			// runtime __pa(_end)
 	adrp	x3, _text			// runtime __pa(_text)
 	sub	x6, x6, x3			// _end - _text
@@ -47,7 +47,7 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 	u64 i;
 	phys_addr_t start, end;
 
-	nr_ranges = 1; /* for exclusion of crashkernel region */
+	nr_ranges = 2; /* for exclusion of crashkernel region */
 	for_each_mem_range(i, &start, &end)
 		nr_ranges++;
@@ -1563,6 +1563,18 @@ static int binder_inc_ref_for_node(struct binder_proc *proc,
 	}
 	ret = binder_inc_ref_olocked(ref, strong, target_list);
 	*rdata = ref->data;
+	if (ret && ref == new_ref) {
+		/*
+		 * Cleanup the failed reference here as the target
+		 * could now be dead and have already released its
+		 * references by now. Calling on the new reference
+		 * with strong=0 and a tmp_refs will not decrement
+		 * the node. The new_ref gets kfree'd below.
+		 */
+		binder_cleanup_ref_olocked(new_ref);
+		ref = NULL;
+	}
+
 	binder_proc_unlock(proc);
 	if (new_ref && ref != new_ref)
 		/*
@@ -322,7 +322,6 @@ static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
 	 */
 	if (vma) {
 		vm_start = vma->vm_start;
-		alloc->vma_vm_mm = vma->vm_mm;
 		mmap_assert_write_locked(alloc->vma_vm_mm);
 	} else {
 		mmap_assert_locked(alloc->vma_vm_mm);
@@ -795,7 +794,6 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
 	binder_insert_free_buffer(alloc, buffer);
 	alloc->free_async_space = alloc->buffer_size / 2;
 	binder_alloc_set_vma(alloc, vma);
-	mmgrab(alloc->vma_vm_mm);
 
 	return 0;
 
@@ -1091,6 +1089,8 @@ static struct shrinker binder_shrinker = {
 void binder_alloc_init(struct binder_alloc *alloc)
 {
 	alloc->pid = current->group_leader->pid;
+	alloc->vma_vm_mm = current->mm;
+	mmgrab(alloc->vma_vm_mm);
 	mutex_init(&alloc->mutex);
 	INIT_LIST_HEAD(&alloc->buffers);
 }
@@ -735,7 +735,7 @@ void update_siblings_masks(unsigned int cpuid)
 	int cpu, ret;
 
 	ret = detect_cache_attributes(cpuid);
-	if (ret)
+	if (ret && ret != -ENOENT)
 		pr_info("Early cacheinfo failed, ret = %d\n", ret);
 
 	/* update core and thread sibling masks */
@@ -274,12 +274,42 @@ static int __init deferred_probe_timeout_setup(char *str)
 }
 __setup("deferred_probe_timeout=", deferred_probe_timeout_setup);
 
+/**
+ * driver_deferred_probe_check_state() - Check deferred probe state
+ * @dev: device to check
+ *
+ * Return:
+ * * -ENODEV if initcalls have completed and modules are disabled.
+ * * -ETIMEDOUT if the deferred probe timeout was set and has expired
+ *   and modules are enabled.
+ * * -EPROBE_DEFER in other cases.
+ *
+ * Drivers or subsystems can opt-in to calling this function instead of directly
+ * returning -EPROBE_DEFER.
+ */
+int driver_deferred_probe_check_state(struct device *dev)
+{
+	if (!IS_ENABLED(CONFIG_MODULES) && initcalls_done) {
+		dev_warn(dev, "ignoring dependency for device, assuming no driver\n");
+		return -ENODEV;
+	}
+
+	if (!driver_deferred_probe_timeout && initcalls_done) {
+		dev_warn(dev, "deferred probe timeout, ignoring dependency\n");
+		return -ETIMEDOUT;
+	}
+
+	return -EPROBE_DEFER;
+}
+EXPORT_SYMBOL_GPL(driver_deferred_probe_check_state);
+
 static void deferred_probe_timeout_work_func(struct work_struct *work)
 {
 	struct device_private *p;
 
 	fw_devlink_drivers_done();
 
 	driver_deferred_probe_timeout = 0;
 	driver_deferred_probe_trigger();
 	flush_work(&deferred_probe_work);
@@ -881,6 +911,11 @@ static int __device_attach_driver(struct device_driver *drv, void *_data)
 		dev_dbg(dev, "Device match requests probe deferral\n");
 		dev->can_match = true;
 		driver_deferred_probe_add(dev);
+		/*
+		 * Device can't match with a driver right now, so don't attempt
+		 * to match or bind with other drivers on the bus.
+		 */
 		return ret;
 	} else if (ret < 0) {
 		dev_dbg(dev, "Bus failed to match device: %d\n", ret);
 		return ret;
@@ -1120,6 +1155,11 @@ static int __driver_attach(struct device *dev, void *data)
 		dev_dbg(dev, "Device match requests probe deferral\n");
 		dev->can_match = true;
 		driver_deferred_probe_add(dev);
+		/*
+		 * Driver could not match with device, but may match with
+		 * another device on the bus.
+		 */
 		return 0;
 	} else if (ret < 0) {
 		dev_dbg(dev, "Bus failed to match device: %d\n", ret);
 		return ret;
|
@ -93,10 +93,9 @@ static void fw_dev_release(struct device *dev)
|
||||
{
|
||||
struct fw_sysfs *fw_sysfs = to_fw_sysfs(dev);
|
||||
|
||||
if (fw_sysfs->fw_upload_priv) {
|
||||
free_fw_priv(fw_sysfs->fw_priv);
|
||||
kfree(fw_sysfs->fw_upload_priv);
|
||||
}
|
||||
if (fw_sysfs->fw_upload_priv)
|
||||
fw_upload_free(fw_sysfs);
|
||||
|
||||
kfree(fw_sysfs);
|
||||
}
|
||||
|
||||
|
@ -106,12 +106,17 @@ extern struct device_attribute dev_attr_cancel;
|
||||
extern struct device_attribute dev_attr_remaining_size;
|
||||
|
||||
int fw_upload_start(struct fw_sysfs *fw_sysfs);
|
||||
void fw_upload_free(struct fw_sysfs *fw_sysfs);
|
||||
umode_t fw_upload_is_visible(struct kobject *kobj, struct attribute *attr, int n);
|
||||
#else
|
||||
static inline int fw_upload_start(struct fw_sysfs *fw_sysfs)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void fw_upload_free(struct fw_sysfs *fw_sysfs)
|
||||
{
|
||||
}
|
||||
#endif
|
||||
|
||||
#endif /* __FIRMWARE_SYSFS_H */
|
||||
|
@@ -264,6 +264,15 @@ int fw_upload_start(struct fw_sysfs *fw_sysfs)
 	return 0;
 }
 
+void fw_upload_free(struct fw_sysfs *fw_sysfs)
+{
+	struct fw_upload_priv *fw_upload_priv = fw_sysfs->fw_upload_priv;
+
+	free_fw_priv(fw_sysfs->fw_priv);
+	kfree(fw_upload_priv->fw_upload);
+	kfree(fw_upload_priv);
+}
+
 /**
  * firmware_upload_register() - register for the firmware upload sysfs API
  * @module: kernel module of this device
@@ -377,6 +386,7 @@ void firmware_upload_unregister(struct fw_upload *fw_upload)
 {
 	struct fw_sysfs *fw_sysfs = fw_upload->priv;
 	struct fw_upload_priv *fw_upload_priv = fw_sysfs->fw_upload_priv;
+	struct module *module = fw_upload_priv->module;
 
 	mutex_lock(&fw_upload_priv->lock);
 	if (fw_upload_priv->progress == FW_UPLOAD_PROG_IDLE) {
@@ -392,6 +402,6 @@ void firmware_upload_unregister(struct fw_upload *fw_upload)
 
 unregister:
 	device_unregister(&fw_sysfs->dev);
-	module_put(fw_upload_priv->module);
+	module_put(module);
 }
 EXPORT_SYMBOL_GPL(firmware_upload_unregister);
@@ -2733,7 +2733,7 @@ static int __genpd_dev_pm_attach(struct device *dev, struct device *base_dev,
 		mutex_unlock(&gpd_list_lock);
 		dev_dbg(dev, "%s() failed to find PM domain: %ld\n",
 			__func__, PTR_ERR(pd));
-		return -ENODEV;
+		return driver_deferred_probe_check_state(base_dev);
 	}
 
 	dev_dbg(dev, "adding to PM domain %s\n", pd->name);
@@ -430,12 +430,25 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
 {
 	struct mhi_event *mhi_event = dev;
 	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
-	struct mhi_event_ctxt *er_ctxt =
-		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+	struct mhi_event_ctxt *er_ctxt;
 	struct mhi_ring *ev_ring = &mhi_event->ring;
-	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
+	dma_addr_t ptr;
 	void *dev_rp;
 
+	/*
+	 * If CONFIG_DEBUG_SHIRQ is set, the IRQ handler will get invoked during __free_irq()
+	 * and by that time mhi_ctxt() would've freed. So check for the existence of mhi_ctxt
+	 * before handling the IRQs.
+	 */
+	if (!mhi_cntrl->mhi_ctxt) {
+		dev_dbg(&mhi_cntrl->mhi_dev->dev,
+			"mhi_ctxt has been freed\n");
+		return IRQ_HANDLED;
+	}
+
+	er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+	ptr = le64_to_cpu(er_ctxt->rp);
+
 	if (!is_valid_ring_ptr(ev_ring, ptr)) {
 		dev_err(&mhi_cntrl->mhi_dev->dev,
 			"Event ring rp points outside of the event ring\n");
@@ -480,6 +480,11 @@ static ssize_t splice_write_null(struct pipe_inode_info *pipe, struct file *out,
 	return splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_null);
 }
 
+static int uring_cmd_null(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
+{
+	return 0;
+}
+
 static ssize_t read_iter_zero(struct kiocb *iocb, struct iov_iter *iter)
 {
 	size_t written = 0;
@@ -663,6 +668,7 @@ static const struct file_operations null_fops = {
 	.read_iter = read_iter_null,
 	.write_iter = write_iter_null,
 	.splice_write = splice_write_null,
+	.uring_cmd = uring_cmd_null,
 };
 
 static const struct file_operations __maybe_unused port_fops = {
@@ -295,7 +295,8 @@ void dma_resv_add_fence(struct dma_resv *obj, struct dma_fence *fence,
 		enum dma_resv_usage old_usage;
 
 		dma_resv_list_entry(fobj, i, obj, &old, &old_usage);
-		if ((old->context == fence->context && old_usage >= usage) ||
+		if ((old->context == fence->context && old_usage >= usage &&
+		     dma_fence_is_later(fence, old)) ||
 		    dma_fence_is_signaled(old)) {
 			dma_resv_list_set(fobj, i, fence, usage);
 			dma_fence_put(old);
@@ -5524,7 +5524,8 @@ bool amdgpu_device_is_peer_accessible(struct amdgpu_device *adev,
 		~*peer_adev->dev->dma_mask : ~((1ULL << 32) - 1);
 	resource_size_t aper_limit =
 		adev->gmc.aper_base + adev->gmc.aper_size - 1;
-	bool p2p_access = !(pci_p2pdma_distance_many(adev->pdev,
+	bool p2p_access = !adev->gmc.xgmi.connected_to_cpu &&
+			  !(pci_p2pdma_distance_many(adev->pdev,
 					&peer_adev->dev, 1, true) < 0);
 
 	return pcie_p2p && p2p_access && (adev->gmc.visible_vram_size &&
@@ -66,10 +66,15 @@ static bool is_fru_eeprom_supported(struct amdgpu_device *adev)
 		return true;
 	case CHIP_SIENNA_CICHLID:
-		if (strnstr(atom_ctx->vbios_version, "D603",
-			    sizeof(atom_ctx->vbios_version)))
-			return true;
-		else
-			return false;
+		if (strnstr(atom_ctx->vbios_version, "D603",
+			    sizeof(atom_ctx->vbios_version))) {
+			if (strnstr(atom_ctx->vbios_version, "D603GLXE",
+				    sizeof(atom_ctx->vbios_version)))
+				return false;
+			else
+				return true;
+		} else {
+			return false;
+		}
 	default:
 		return false;
 	}
@@ -159,7 +159,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
 	amdgpu_sync_free(&job->sync);
 	amdgpu_sync_free(&job->sched_sync);
 
-	dma_fence_put(&job->hw_fence);
+	if (!job->hw_fence.ops)
+		kfree(job);
+	else
+		dma_fence_put(&job->hw_fence);
 }
 
 int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
|
@ -2401,7 +2401,7 @@ static int psp_load_smu_fw(struct psp_context *psp)
|
||||
static bool fw_load_skip_check(struct psp_context *psp,
|
||||
struct amdgpu_firmware_info *ucode)
|
||||
{
|
||||
if (!ucode->fw)
|
||||
if (!ucode->fw || !ucode->ucode_size)
|
||||
return true;
|
||||
|
||||
if (ucode->ucode_id == AMDGPU_UCODE_ID_SMC &&
|
||||
|
@@ -4274,35 +4274,45 @@ static int gfx_v10_0_init_microcode(struct amdgpu_device *adev)
 
 	}
 
-	info = &adev->firmware.ucode[AMDGPU_UCODE_ID_GLOBAL_TAP_DELAYS];
-	info->ucode_id = AMDGPU_UCODE_ID_GLOBAL_TAP_DELAYS;
-	info->fw = adev->gfx.rlc_fw;
-	adev->firmware.fw_size +=
-		ALIGN(adev->gfx.rlc.global_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	if (adev->gfx.rlc.global_tap_delays_ucode_size_bytes) {
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_GLOBAL_TAP_DELAYS];
+		info->ucode_id = AMDGPU_UCODE_ID_GLOBAL_TAP_DELAYS;
+		info->fw = adev->gfx.rlc_fw;
+		adev->firmware.fw_size +=
+			ALIGN(adev->gfx.rlc.global_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	}
 
-	info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SE0_TAP_DELAYS];
-	info->ucode_id = AMDGPU_UCODE_ID_SE0_TAP_DELAYS;
-	info->fw = adev->gfx.rlc_fw;
-	adev->firmware.fw_size +=
-		ALIGN(adev->gfx.rlc.se0_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	if (adev->gfx.rlc.se0_tap_delays_ucode_size_bytes) {
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SE0_TAP_DELAYS];
+		info->ucode_id = AMDGPU_UCODE_ID_SE0_TAP_DELAYS;
+		info->fw = adev->gfx.rlc_fw;
+		adev->firmware.fw_size +=
+			ALIGN(adev->gfx.rlc.se0_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	}
 
-	info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SE1_TAP_DELAYS];
-	info->ucode_id = AMDGPU_UCODE_ID_SE1_TAP_DELAYS;
-	info->fw = adev->gfx.rlc_fw;
-	adev->firmware.fw_size +=
-		ALIGN(adev->gfx.rlc.se1_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	if (adev->gfx.rlc.se1_tap_delays_ucode_size_bytes) {
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SE1_TAP_DELAYS];
+		info->ucode_id = AMDGPU_UCODE_ID_SE1_TAP_DELAYS;
+		info->fw = adev->gfx.rlc_fw;
+		adev->firmware.fw_size +=
+			ALIGN(adev->gfx.rlc.se1_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	}
 
-	info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SE2_TAP_DELAYS];
-	info->ucode_id = AMDGPU_UCODE_ID_SE2_TAP_DELAYS;
-	info->fw = adev->gfx.rlc_fw;
-	adev->firmware.fw_size +=
-		ALIGN(adev->gfx.rlc.se2_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	if (adev->gfx.rlc.se2_tap_delays_ucode_size_bytes) {
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SE2_TAP_DELAYS];
+		info->ucode_id = AMDGPU_UCODE_ID_SE2_TAP_DELAYS;
+		info->fw = adev->gfx.rlc_fw;
+		adev->firmware.fw_size +=
+			ALIGN(adev->gfx.rlc.se2_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	}
 
-	info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SE3_TAP_DELAYS];
-	info->ucode_id = AMDGPU_UCODE_ID_SE3_TAP_DELAYS;
-	info->fw = adev->gfx.rlc_fw;
-	adev->firmware.fw_size +=
-		ALIGN(adev->gfx.rlc.se3_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	if (adev->gfx.rlc.se3_tap_delays_ucode_size_bytes) {
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SE3_TAP_DELAYS];
+		info->ucode_id = AMDGPU_UCODE_ID_SE3_TAP_DELAYS;
+		info->fw = adev->gfx.rlc_fw;
+		adev->firmware.fw_size +=
+			ALIGN(adev->gfx.rlc.se3_tap_delays_ucode_size_bytes, PAGE_SIZE);
+	}
 
 	info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC1];
 	info->ucode_id = AMDGPU_UCODE_ID_CP_MEC1;
@@ -183,6 +183,7 @@ static int mes_v11_0_add_hw_queue(struct amdgpu_mes *mes,
 	mes_add_queue_pkt.trap_handler_addr = input->tba_addr;
 	mes_add_queue_pkt.tma_addr = input->tma_addr;
 	mes_add_queue_pkt.is_kfd_process = input->is_kfd_process;
+	mes_add_queue_pkt.trap_en = 1;
 
 	return mes_v11_0_submit_pkt_and_poll_completion(mes,
 			&mes_add_queue_pkt, sizeof(mes_add_queue_pkt),
@@ -1094,7 +1094,8 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
 				dc->current_state->stream_count != context->stream_count)
 			should_disable = true;
 
-		if (old_stream && !dc->current_state->res_ctx.pipe_ctx[i].top_pipe) {
+		if (old_stream && !dc->current_state->res_ctx.pipe_ctx[i].top_pipe &&
+				!dc->current_state->res_ctx.pipe_ctx[i].prev_odm_pipe) {
 			struct pipe_ctx *old_pipe, *new_pipe;
 
 			old_pipe = &dc->current_state->res_ctx.pipe_ctx[i];
@@ -104,6 +104,9 @@ static bool has_query_dp_alt(struct link_encoder *enc)
 {
 	struct dc_dmub_srv *dc_dmub_srv = enc->ctx->dmub_srv;
 
+	if (enc->ctx->dce_version >= DCN_VERSION_3_15)
+		return true;
+
 	/* Supports development firmware and firmware >= 4.0.11 */
 	return dc_dmub_srv &&
 	       !(dc_dmub_srv->dmub->fw_version >= DMUB_FW_VERSION(4, 0, 0) &&
@@ -317,6 +317,7 @@ static void enc314_stream_encoder_dp_unblank(
 	/* switch DP encoder to CRTC data, but reset it the fifo first. It may happen
 	 * that it overflows during mode transition, and sometimes doesn't recover.
 	 */
+	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_READ_START_LEVEL, 0x7);
 	REG_UPDATE(DP_STEER_FIFO, DP_STEER_FIFO_RESET, 1);
 	udelay(10);
@@ -98,7 +98,8 @@ static void optc314_set_odm_combine(struct timing_generator *optc, int *opp_id,
 	REG_UPDATE(OPTC_WIDTH_CONTROL,
 			OPTC_SEGMENT_WIDTH, mpcc_hactive);
 
-	REG_SET(OTG_H_TIMING_CNTL, 0, OTG_H_TIMING_DIV_MODE, opp_cnt - 1);
+	REG_UPDATE(OTG_H_TIMING_CNTL,
+			OTG_H_TIMING_DIV_MODE, opp_cnt - 1);
 	optc1->opp_count = opp_cnt;
 }
@@ -454,6 +454,7 @@ static const struct dcn31_hpo_dp_stream_encoder_registers hpo_dp_stream_enc_regs
 	hpo_dp_stream_encoder_reg_list(0),
 	hpo_dp_stream_encoder_reg_list(1),
 	hpo_dp_stream_encoder_reg_list(2),
+	hpo_dp_stream_encoder_reg_list(3)
 };
 
 static const struct dcn31_hpo_dp_stream_encoder_shift hpo_dp_se_shift = {
@@ -225,19 +225,19 @@ void dccg32_set_dpstreamclk(
 	case 0:
 		REG_UPDATE_2(DPSTREAMCLK_CNTL,
 			     DPSTREAMCLK0_EN,
-			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK0_SRC_SEL, 0);
+			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK0_SRC_SEL, otg_inst);
 		break;
 	case 1:
 		REG_UPDATE_2(DPSTREAMCLK_CNTL, DPSTREAMCLK1_EN,
-			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK1_SRC_SEL, 1);
+			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK1_SRC_SEL, otg_inst);
 		break;
 	case 2:
 		REG_UPDATE_2(DPSTREAMCLK_CNTL, DPSTREAMCLK2_EN,
-			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK2_SRC_SEL, 2);
+			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK2_SRC_SEL, otg_inst);
 		break;
 	case 3:
 		REG_UPDATE_2(DPSTREAMCLK_CNTL, DPSTREAMCLK3_EN,
-			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK3_SRC_SEL, 3);
+			     (src == REFCLK) ? 0 : 1, DPSTREAMCLK3_SRC_SEL, otg_inst);
 		break;
 	default:
 		BREAK_TO_DEBUGGER();
@@ -310,6 +310,11 @@ static void enc32_stream_encoder_dp_unblank(
 	// TODO: Confirm if we need to wait for DIG_SYMCLK_FE_ON
 	REG_WAIT(DIG_FE_CNTL, DIG_SYMCLK_FE_ON, 1, 10, 5000);
 
+	/* read start level = 0 will bring underflow / overflow and DIG_FIFO_ERROR = 1
+	 * so set it to 1/2 full = 7 before reset as suggested by hardware team.
+	 */
+	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_READ_START_LEVEL, 0x7);
+
 	REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_RESET, 1);
 
 	REG_WAIT(DIG_FIFO_CTRL0, DIG_FIFO_RESET_DONE, 1, 10, 5000);
@@ -295,24 +295,38 @@ static uint32_t dcn32_calculate_cab_allocation(struct dc *dc, struct dc_state *c
 		}
 
 		// Include cursor size for CAB allocation
-		if (stream->cursor_position.enable && plane->address.grph.cursor_cache_addr.quad_part) {
-			cursor_size = dc->caps.max_cursor_size * dc->caps.max_cursor_size;
-			switch (stream->cursor_attributes.color_format) {
-			case CURSOR_MODE_MONO:
-				cursor_size /= 2;
-				break;
-			case CURSOR_MODE_COLOR_1BIT_AND:
-			case CURSOR_MODE_COLOR_PRE_MULTIPLIED_ALPHA:
-			case CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA:
-				cursor_size *= 4;
-				break;
-
-			case CURSOR_MODE_COLOR_64BIT_FP_PRE_MULTIPLIED:
-			case CURSOR_MODE_COLOR_64BIT_FP_UN_PRE_MULTIPLIED:
-				cursor_size *= 8;
-				break;
-			}
-			cache_lines_used += dcn32_cache_lines_for_surface(dc, surface_size,
-									plane->address.grph.cursor_cache_addr.quad_part);
-		}
+		for (j = 0; j < dc->res_pool->pipe_count; j++) {
+			struct pipe_ctx *pipe = &ctx->res_ctx.pipe_ctx[j];
+			struct hubp *hubp = pipe->plane_res.hubp;
+
+			if (pipe->stream && pipe->plane_state && hubp)
+				/* Find the cursor plane and use the exact size instead of
+				 * using the max for calculation
+				 */
+				if (hubp->curs_attr.width > 0) {
+					cursor_size = hubp->curs_attr.width * hubp->curs_attr.height;
+					break;
+				}
+		}
+
+		switch (stream->cursor_attributes.color_format) {
+		case CURSOR_MODE_MONO:
+			cursor_size /= 2;
+			break;
+		case CURSOR_MODE_COLOR_1BIT_AND:
+		case CURSOR_MODE_COLOR_PRE_MULTIPLIED_ALPHA:
+		case CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA:
+			cursor_size *= 4;
+			break;
+
+		case CURSOR_MODE_COLOR_64BIT_FP_PRE_MULTIPLIED:
+		case CURSOR_MODE_COLOR_64BIT_FP_UN_PRE_MULTIPLIED:
+			cursor_size *= 8;
+			break;
+		}
+
+		if (stream->cursor_position.enable && plane->address.grph.cursor_cache_addr.quad_part) {
+			cache_lines_used += dcn32_cache_lines_for_surface(dc, cursor_size,
+									plane->address.grph.cursor_cache_addr.quad_part);
+		}
 	}
@@ -325,6 +339,26 @@ static uint32_t dcn32_calculate_cab_allocation(struct dc *dc, struct dc_state *c
 	if (cache_lines_used % lines_per_way > 0)
 		num_ways++;
 
+	for (i = 0; i < ctx->stream_count; i++) {
+		stream = ctx->streams[i];
+		for (j = 0; j < ctx->stream_status[i].plane_count; j++) {
+			plane = ctx->stream_status[i].plane_states[j];
+
+			if (stream->cursor_position.enable && plane &&
+				!plane->address.grph.cursor_cache_addr.quad_part &&
+				cursor_size > 16384) {
+				/* Cursor caching is not supported since it won't be on the same line.
+				 * So we need an extra line to accommodate it. With large cursors and a single 4k monitor
+				 * this case triggers corruption. If we're at the edge, then dont trigger display refresh
+				 * from MALL. We only need to cache cursor if its greater that 64x64 at 4 bpp.
+				 */
+				num_ways++;
+				/* We only expect one cursor plane */
+				break;
+			}
+		}
+	}
+
 	return num_ways;
 }
@@ -144,7 +144,7 @@ bool dcn32_all_pipes_have_stream_and_plane(struct dc *dc,
 		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
 
 		if (!pipe->stream)
-			continue;
+			return false;
 
 		if (!pipe->plane_state)
 			return false;
@@ -1014,6 +1014,15 @@ static void dcn32_full_validate_bw_helper(struct dc *dc,
 			dc->debug.force_subvp_mclk_switch)) {
 
 		dcn32_merge_pipes_for_subvp(dc, context);
+		// to re-initialize viewport after the pipe merge
+		for (int i = 0; i < dc->res_pool->pipe_count; i++) {
+			struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+
+			if (!pipe_ctx->plane_state || !pipe_ctx->stream)
+				continue;
+
+			resource_build_scaling_params(pipe_ctx);
+		}
 
 		while (!found_supported_config && dcn32_enough_pipes_for_subvp(dc, context) &&
 			dcn32_assign_subvp_pipe(dc, context, &dc_pipe_idx)) {
@@ -116,7 +116,7 @@ static void setup_hpo_dp_stream_encoder(struct pipe_ctx *pipe_ctx)
 	dto_params.timing = &pipe_ctx->stream->timing;
 	dto_params.ref_dtbclk_khz = dc->clk_mgr->funcs->get_dtb_ref_clk_frequency(dc->clk_mgr);
 
-	dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst, link_enc->inst);
+	dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst, stream_enc->inst);
 	dccg->funcs->enable_symclk32_se(dccg, stream_enc->inst, phyd32clk);
 	dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
 	stream_enc->funcs->enable_stream(stream_enc);
@@ -137,7 +137,7 @@ static void reset_hpo_dp_stream_encoder(struct pipe_ctx *pipe_ctx)
 	stream_enc->funcs->disable(stream_enc);
 	dccg->funcs->set_dtbclk_dto(dccg, &dto_params);
 	dccg->funcs->disable_symclk32_se(dccg, stream_enc->inst);
-	dccg->funcs->set_dpstreamclk(dccg, REFCLK, tg->inst, pipe_ctx->link_res.hpo_dp_link_enc->inst);
+	dccg->funcs->set_dpstreamclk(dccg, REFCLK, tg->inst, stream_enc->inst);
 }
 
 static void setup_hpo_dp_stream_attribute(struct pipe_ctx *pipe_ctx)
@@ -268,7 +268,8 @@ union MESAPI__ADD_QUEUE {
 			uint32_t is_tmz_queue		: 1;
 			uint32_t map_kiq_utility_queue	: 1;
 			uint32_t is_kfd_process		: 1;
-			uint32_t reserved		: 22;
+			uint32_t trap_en		: 1;
+			uint32_t reserved		: 21;
 		};
 		struct MES_API_STATUS		api_status;
 		uint64_t			tma_addr;
@@ -25,7 +25,7 @@
 #define SMU13_DRIVER_IF_V13_0_0_H
 
 //Increment this version if SkuTable_t or BoardTable_t change
-#define PPTABLE_VERSION 0x22
+#define PPTABLE_VERSION 0x24
 
 #define NUM_GFXCLK_DPM_LEVELS	16
 #define NUM_SOCCLK_DPM_LEVELS	8
@@ -30,7 +30,7 @@
 #define SMU13_DRIVER_IF_VERSION_ALDE 0x08
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x05
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_5 0x04
-#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0 0x2E
+#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0 0x30
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_7 0x2C
 
 #define SMU13_MODE1_RESET_WAIT_TIME_IN_MS 500  //500ms
@@ -291,5 +291,11 @@ int smu_v13_0_set_default_dpm_tables(struct smu_context *smu);
 void smu_v13_0_set_smu_mailbox_registers(struct smu_context *smu);
 
 int smu_v13_0_mode1_reset(struct smu_context *smu);
+
+int smu_v13_0_get_pptable_from_firmware(struct smu_context *smu,
+					void **table,
+					uint32_t *size,
+					uint32_t pptable_id);
+
 #endif
 #endif
@@ -84,9 +84,6 @@ MODULE_FIRMWARE("amdgpu/smu_13_0_7.bin");
 static const int link_width[] = {0, 1, 2, 4, 8, 12, 16};
 static const int link_speed[] = {25, 50, 80, 160};
 
-static int smu_v13_0_get_pptable_from_firmware(struct smu_context *smu, void **table, uint32_t *size,
-					       uint32_t pptable_id);
-
 int smu_v13_0_init_microcode(struct smu_context *smu)
 {
 	struct amdgpu_device *adev = smu->adev;
@@ -224,23 +221,19 @@ int smu_v13_0_init_pptable_microcode(struct smu_context *smu)
 
 	/*
 	 * Temporary solution for SMU V13.0.0 with SCPM enabled:
-	 * - use 36831 signed pptable when pp_table_id is 3683
-	 * - use 37151 signed pptable when pp_table_id is 3715
-	 * - use 36641 signed pptable when pp_table_id is 3664 or 0
-	 * TODO: drop these when the pptable carried in vbios is ready.
+	 * - use vbios carried pptable when pptable_id is 3664, 3715 or 3795
+	 * - use 36831 soft pptable when pptable_id is 3683
 	 */
 	if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) {
 		switch (pptable_id) {
-		case 0:
 		case 3664:
-			pptable_id = 36641;
+		case 3715:
+		case 3795:
+			pptable_id = 0;
 			break;
 		case 3683:
 			pptable_id = 36831;
 			break;
-		case 3715:
-			pptable_id = 37151;
-			break;
 		default:
 			dev_err(adev->dev, "Unsupported pptable id %d\n", pptable_id);
 			return -EINVAL;
@@ -425,8 +418,10 @@ static int smu_v13_0_get_pptable_from_vbios(struct smu_context *smu, void **tabl
 	return 0;
 }
 
-static int smu_v13_0_get_pptable_from_firmware(struct smu_context *smu, void **table, uint32_t *size,
-					       uint32_t pptable_id)
+int smu_v13_0_get_pptable_from_firmware(struct smu_context *smu,
+					void **table,
+					uint32_t *size,
+					uint32_t pptable_id)
 {
 	const struct smc_firmware_header_v1_0 *hdr;
 	struct amdgpu_device *adev = smu->adev;
@@ -388,11 +388,29 @@ static int smu_v13_0_0_append_powerplay_table(struct smu_context *smu)
 	return 0;
 }
 
+static int smu_v13_0_0_get_pptable_from_pmfw(struct smu_context *smu,
+					     void **table,
+					     uint32_t *size)
+{
+	struct smu_table_context *smu_table = &smu->smu_table;
+	void *combo_pptable = smu_table->combo_pptable;
+	int ret = 0;
+
+	ret = smu_cmn_get_combo_pptable(smu);
+	if (ret)
+		return ret;
+
+	*table = combo_pptable;
+	*size = sizeof(struct smu_13_0_0_powerplay_table);
+
+	return 0;
+}
+
 static int smu_v13_0_0_setup_pptable(struct smu_context *smu)
 {
 	struct smu_table_context *smu_table = &smu->smu_table;
-	void *combo_pptable = smu_table->combo_pptable;
 	struct amdgpu_device *adev = smu->adev;
+	uint32_t pptable_id;
 	int ret = 0;
 
 	/*
@@ -401,17 +419,51 @@ static int smu_v13_0_0_setup_pptable(struct smu_context *smu)
 	 * rely on the combo pptable(and its revelant SMU message).
 	 */
 	if (adev->scpm_enabled) {
-		ret = smu_cmn_get_combo_pptable(smu);
-		if (ret)
-			return ret;
-
-		smu->smu_table.power_play_table = combo_pptable;
-		smu->smu_table.power_play_table_size = sizeof(struct smu_13_0_0_powerplay_table);
+		ret = smu_v13_0_0_get_pptable_from_pmfw(smu,
+							&smu_table->power_play_table,
+							&smu_table->power_play_table_size);
 	} else {
-		ret = smu_v13_0_setup_pptable(smu);
-		if (ret)
-			return ret;
+		/* override pptable_id from driver parameter */
+		if (amdgpu_smu_pptable_id >= 0) {
+			pptable_id = amdgpu_smu_pptable_id;
+			dev_info(adev->dev, "override pptable id %d\n", pptable_id);
+		} else {
+			pptable_id = smu_table->boot_values.pp_table_id;
+		}
+
+		/*
+		 * Temporary solution for SMU V13.0.0 with SCPM disabled:
+		 * - use vbios carried pptable when pptable_id is 3664, 3715 or 3795
+		 * - use soft pptable when pptable_id is 3683
+		 */
+		if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) {
+			switch (pptable_id) {
+			case 3664:
+			case 3715:
+			case 3795:
+				pptable_id = 0;
+				break;
+			case 3683:
+				break;
+			default:
+				dev_err(adev->dev, "Unsupported pptable id %d\n", pptable_id);
+				return -EINVAL;
+			}
+		}
+
+		/* force using vbios pptable in sriov mode */
+		if ((amdgpu_sriov_vf(adev) || !pptable_id) && (amdgpu_emu_mode != 1))
+			ret = smu_v13_0_0_get_pptable_from_pmfw(smu,
+								&smu_table->power_play_table,
+								&smu_table->power_play_table_size);
+		else
+			ret = smu_v13_0_get_pptable_from_firmware(smu,
+								  &smu_table->power_play_table,
+								  &smu_table->power_play_table_size,
+								  pptable_id);
 	}
+	if (ret)
+		return ret;
 
 	ret = smu_v13_0_0_store_powerplay_table(smu);
 	if (ret)
@@ -400,11 +400,27 @@ static int smu_v13_0_7_append_powerplay_table(struct smu_context *smu)
 	return 0;
 }
 
+static int smu_v13_0_7_get_pptable_from_pmfw(struct smu_context *smu,
+					     void **table,
+					     uint32_t *size)
+{
+	struct smu_table_context *smu_table = &smu->smu_table;
+	void *combo_pptable = smu_table->combo_pptable;
+	int ret = 0;
+
+	ret = smu_cmn_get_combo_pptable(smu);
+	if (ret)
+		return ret;
+
+	*table = combo_pptable;
+	*size = sizeof(struct smu_13_0_7_powerplay_table);
+
+	return 0;
+}
+
 static int smu_v13_0_7_setup_pptable(struct smu_context *smu)
 {
 	struct smu_table_context *smu_table = &smu->smu_table;
 	void *combo_pptable = smu_table->combo_pptable;
 	struct amdgpu_device *adev = smu->adev;
 	int ret = 0;
 
@@ -413,18 +429,11 @@ static int smu_v13_0_7_setup_pptable(struct smu_context *smu)
 	 * be used directly by driver. To get the raw pptable, we need to
 	 * rely on the combo pptable(and its revelant SMU message).
 	 */
-	if (adev->scpm_enabled) {
-		ret = smu_cmn_get_combo_pptable(smu);
-		if (ret)
-			return ret;
-
-		smu->smu_table.power_play_table = combo_pptable;
-		smu->smu_table.power_play_table_size = sizeof(struct smu_13_0_7_powerplay_table);
-	} else {
-		ret = smu_v13_0_setup_pptable(smu);
-		if (ret)
-			return ret;
-	}
+	ret = smu_v13_0_7_get_pptable_from_pmfw(smu,
+						&smu_table->power_play_table,
+						&smu_table->power_play_table_size);
+	if (ret)
+		return ret;
 
 	ret = smu_v13_0_7_store_powerplay_table(smu);
 	if (ret)
@@ -2070,7 +2070,14 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
 	else
 		intel_dsi->ports = BIT(port);
 
+	if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports))
+		intel_connector->panel.vbt.dsi.bl_ports &= intel_dsi->ports;
+
 	intel_dsi->dcs_backlight_ports = intel_connector->panel.vbt.dsi.bl_ports;
 
+	if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports))
+		intel_connector->panel.vbt.dsi.cabc_ports &= intel_dsi->ports;
+
 	intel_dsi->dcs_cabc_ports = intel_connector->panel.vbt.dsi.cabc_ports;
 
 	for_each_dsi_port(port, intel_dsi->ports) {
@@ -16,6 +16,7 @@
 #include "intel_dsi_dcs_backlight.h"
 #include "intel_panel.h"
 #include "intel_pci_config.h"
+#include "intel_pps.h"
 
 /**
  * scale - scale values from one range to another
@@ -971,26 +972,24 @@ int intel_backlight_device_register(struct intel_connector *connector)
 	if (!name)
 		return -ENOMEM;
 
-	bd = backlight_device_register(name, connector->base.kdev, connector,
-				       &intel_backlight_device_ops, &props);
-
-	/*
-	 * Using the same name independent of the drm device or connector
-	 * prevents registration of multiple backlight devices in the
-	 * driver. However, we need to use the default name for backward
-	 * compatibility. Use unique names for subsequent backlight devices as a
-	 * fallback when the default name already exists.
-	 */
-	if (IS_ERR(bd) && PTR_ERR(bd) == -EEXIST) {
+	bd = backlight_device_get_by_name(name);
+	if (bd) {
+		put_device(&bd->dev);
+		/*
+		 * Using the same name independent of the drm device or connector
+		 * prevents registration of multiple backlight devices in the
+		 * driver. However, we need to use the default name for backward
+		 * compatibility. Use unique names for subsequent backlight devices as a
+		 * fallback when the default name already exists.
		 */
 		kfree(name);
 		name = kasprintf(GFP_KERNEL, "card%d-%s-backlight",
 				 i915->drm.primary->index, connector->base.name);
 		if (!name)
 			return -ENOMEM;
-
-		bd = backlight_device_register(name, connector->base.kdev, connector,
-					       &intel_backlight_device_ops, &props);
 	}
+
+	bd = backlight_device_register(name, connector->base.kdev, connector,
+				       &intel_backlight_device_ops, &props);
 
 	if (IS_ERR(bd)) {
 		drm_err(&i915->drm,
@@ -1773,9 +1772,13 @@ void intel_backlight_init_funcs(struct intel_panel *panel)
 			panel->backlight.pwm_funcs = &i9xx_pwm_funcs;
 	}
 
-	if (connector->base.connector_type == DRM_MODE_CONNECTOR_eDP &&
-	    intel_dp_aux_init_backlight_funcs(connector) == 0)
-		return;
+	if (connector->base.connector_type == DRM_MODE_CONNECTOR_eDP) {
+		if (intel_dp_aux_init_backlight_funcs(connector) == 0)
+			return;
+
+		if (!(dev_priv->quirks & QUIRK_NO_PPS_BACKLIGHT_POWER_HOOK))
+			connector->panel.backlight.power = intel_pps_backlight_power;
+	}
 
 	/* We're using a standard PWM backlight interface */
 	panel->backlight.funcs = &pwm_bl_funcs;
@@ -1596,6 +1596,8 @@ static void parse_dsi_backlight_ports(struct drm_i915_private *i915,
 				      struct intel_panel *panel,
 				      enum port port)
 {
+	enum port port_bc = DISPLAY_VER(i915) >= 11 ? PORT_B : PORT_C;
+
 	if (!panel->vbt.dsi.config->dual_link || i915->vbt.version < 197) {
 		panel->vbt.dsi.bl_ports = BIT(port);
 		if (panel->vbt.dsi.config->cabc_supported)
@@ -1609,11 +1611,11 @@ static void parse_dsi_backlight_ports(struct drm_i915_private *i915,
 		panel->vbt.dsi.bl_ports = BIT(PORT_A);
 		break;
 	case DL_DCS_PORT_C:
-		panel->vbt.dsi.bl_ports = BIT(PORT_C);
+		panel->vbt.dsi.bl_ports = BIT(port_bc);
 		break;
 	default:
 	case DL_DCS_PORT_A_AND_C:
-		panel->vbt.dsi.bl_ports = BIT(PORT_A) | BIT(PORT_C);
+		panel->vbt.dsi.bl_ports = BIT(PORT_A) | BIT(port_bc);
 		break;
 	}
 
@@ -1625,12 +1627,12 @@ static void parse_dsi_backlight_ports(struct drm_i915_private *i915,
 		panel->vbt.dsi.cabc_ports = BIT(PORT_A);
 		break;
 	case DL_DCS_PORT_C:
-		panel->vbt.dsi.cabc_ports = BIT(PORT_C);
+		panel->vbt.dsi.cabc_ports = BIT(port_bc);
 		break;
 	default:
 	case DL_DCS_PORT_A_AND_C:
 		panel->vbt.dsi.cabc_ports =
-			BIT(PORT_A) | BIT(PORT_C);
+			BIT(PORT_A) | BIT(port_bc);
 		break;
 	}
 }
@@ -404,15 +404,17 @@ static int tgl_get_bw_info(struct drm_i915_private *dev_priv, const struct intel
 		int clpchgroup;
 		int j;
 
-		if (i < num_groups - 1)
-			bi_next = &dev_priv->max_bw[i + 1];
-
 		clpchgroup = (sa->deburst * qi.deinterleave / num_channels) << i;
 
-		if (i < num_groups - 1 && clpchgroup < clperchgroup)
-			bi_next->num_planes = (ipqdepth - clpchgroup) / clpchgroup + 1;
-		else
-			bi_next->num_planes = 0;
+		if (i < num_groups - 1) {
+			bi_next = &dev_priv->max_bw[i + 1];
+
+			if (clpchgroup < clperchgroup)
+				bi_next->num_planes = (ipqdepth - clpchgroup) /
+						      clpchgroup + 1;
+			else
+				bi_next->num_planes = 0;
+		}
 
 		bi->num_qgv_points = qi.num_points;
 		bi->num_psf_gv_points = qi.num_psf_points;
@@ -5293,8 +5293,6 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp,
 
 	intel_panel_init(intel_connector);
 
-	if (!(dev_priv->quirks & QUIRK_NO_PPS_BACKLIGHT_POWER_HOOK))
-		intel_connector->panel.backlight.power = intel_pps_backlight_power;
 	intel_backlight_setup(intel_connector, pipe);
 
 	intel_edp_add_properties(intel_dp);
@@ -191,6 +191,9 @@ static struct intel_quirk intel_quirks[] = {
 	/* ASRock ITX*/
 	{ 0x3185, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
 	{ 0x3184, 0x1849, 0x2212, quirk_increase_ddi_disabled_time },
+	/* ECS Liva Q2 */
+	{ 0x3185, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
+	{ 0x3184, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
 };
 
 void intel_init_quirks(struct drm_i915_private *i915)
@@ -1933,7 +1933,14 @@ void vlv_dsi_init(struct drm_i915_private *dev_priv)
 	else
 		intel_dsi->ports = BIT(port);
 
+	if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports))
+		intel_connector->panel.vbt.dsi.bl_ports &= intel_dsi->ports;
+
 	intel_dsi->dcs_backlight_ports = intel_connector->panel.vbt.dsi.bl_ports;
+
+	if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports))
+		intel_connector->panel.vbt.dsi.cabc_ports &= intel_dsi->ports;
+
 	intel_dsi->dcs_cabc_ports = intel_connector->panel.vbt.dsi.cabc_ports;
 
 	/* Create a DSI host (and a device) for each port. */
@@ -638,9 +638,9 @@ static int emit_copy(struct i915_request *rq,
 	return 0;
 }
 
-static int scatter_list_length(struct scatterlist *sg)
+static u64 scatter_list_length(struct scatterlist *sg)
 {
-	int len = 0;
+	u64 len = 0;
 
 	while (sg && sg_dma_len(sg)) {
 		len += sg_dma_len(sg);
@@ -650,28 +650,26 @@ static int scatter_list_length(struct scatterlist *sg)
 	return len;
 }
 
-static void
+static int
 calculate_chunk_sz(struct drm_i915_private *i915, bool src_is_lmem,
-		   int *src_sz, u32 bytes_to_cpy, u32 ccs_bytes_to_cpy)
+		   u64 bytes_to_cpy, u64 ccs_bytes_to_cpy)
 {
-	if (ccs_bytes_to_cpy) {
-		if (!src_is_lmem)
-			/*
-			 * When CHUNK_SZ is passed all the pages upto CHUNK_SZ
-			 * will be taken for the blt. in Flat-ccs supported
-			 * platform Smem obj will have more pages than required
-			 * for main meory hence limit it to the required size
-			 * for main memory
-			 */
-			*src_sz = min_t(int, bytes_to_cpy, CHUNK_SZ);
-	} else { /* ccs handling is not required */
-		*src_sz = CHUNK_SZ;
-	}
+	if (ccs_bytes_to_cpy && !src_is_lmem)
+		/*
+		 * When CHUNK_SZ is passed all the pages upto CHUNK_SZ
+		 * will be taken for the blt. in Flat-ccs supported
+		 * platform Smem obj will have more pages than required
+		 * for main meory hence limit it to the required size
+		 * for main memory
+		 */
+		return min_t(u64, bytes_to_cpy, CHUNK_SZ);
+	else
+		return CHUNK_SZ;
 }
 
-static void get_ccs_sg_sgt(struct sgt_dma *it, u32 bytes_to_cpy)
+static void get_ccs_sg_sgt(struct sgt_dma *it, u64 bytes_to_cpy)
 {
-	u32 len;
+	u64 len;
 
 	do {
 		GEM_BUG_ON(!it->sg || !sg_dma_len(it->sg));
@@ -702,12 +700,12 @@ intel_context_migrate_copy(struct intel_context *ce,
 {
 	struct sgt_dma it_src = sg_sgt(src), it_dst = sg_sgt(dst), it_ccs;
 	struct drm_i915_private *i915 = ce->engine->i915;
-	u32 ccs_bytes_to_cpy = 0, bytes_to_cpy;
+	u64 ccs_bytes_to_cpy = 0, bytes_to_cpy;
 	enum i915_cache_level ccs_cache_level;
 	u32 src_offset, dst_offset;
 	u8 src_access, dst_access;
 	struct i915_request *rq;
-	int src_sz, dst_sz;
+	u64 src_sz, dst_sz;
 	bool ccs_is_src, overwrite_ccs;
 	int err;
 
@@ -790,8 +788,8 @@ intel_context_migrate_copy(struct intel_context *ce,
 		if (err)
 			goto out_rq;
 
-		calculate_chunk_sz(i915, src_is_lmem, &src_sz,
-				   bytes_to_cpy, ccs_bytes_to_cpy);
+		src_sz = calculate_chunk_sz(i915, src_is_lmem,
+					    bytes_to_cpy, ccs_bytes_to_cpy);
 
 		len = emit_pte(rq, &it_src, src_cache_level, src_is_lmem,
 			       src_offset, src_sz);
@@ -4026,6 +4026,13 @@ static inline void guc_init_lrc_mapping(struct intel_guc *guc)
 	/* make sure all descriptors are clean... */
 	xa_destroy(&guc->context_lookup);
 
+	/*
+	 * A reset might have occurred while we had a pending stalled request,
+	 * so make sure we clean that up.
+	 */
+	guc->stalled_request = NULL;
+	guc->submission_stall_reason = STALL_NONE;
+
 	/*
 	 * Some contexts might have been pinned before we enabled GuC
 	 * submission, so we need to add them to the GuC bookeeping.
@@ -298,7 +298,7 @@ static int alloc_resource(struct intel_vgpu *vgpu,
 }
 
 /**
- * inte_gvt_free_vgpu_resource - free HW resource owned by a vGPU
+ * intel_vgpu_free_resource() - free HW resource owned by a vGPU
  * @vgpu: a vGPU
  *
  * This function is used to free the HW resource owned by a vGPU.
@@ -328,7 +328,7 @@ void intel_vgpu_reset_resource(struct intel_vgpu *vgpu)
 }
 
 /**
- * intel_alloc_vgpu_resource - allocate HW resource for a vGPU
+ * intel_vgpu_alloc_resource() - allocate HW resource for a vGPU
  * @vgpu: vGPU
  * @param: vGPU creation params
  *
@@ -2341,7 +2341,7 @@ static int emulate_ggtt_mmio_write(struct intel_vgpu *vgpu, unsigned int off,
 		gvt_vgpu_err("fail to populate guest ggtt entry\n");
 		/* guest driver may read/write the entry when partial
 		 * update the entry in this situation p2m will fail
-		 * settting the shadow entry to point to a scratch page
+		 * setting the shadow entry to point to a scratch page
 		 */
 		ops->set_pfn(&m, gvt->gtt.scratch_mfn);
 	} else
@@ -905,7 +905,7 @@ static int update_fdi_rx_iir_status(struct intel_vgpu *vgpu,
 	else if (FDI_RX_IMR_TO_PIPE(offset) != INVALID_INDEX)
 		index = FDI_RX_IMR_TO_PIPE(offset);
 	else {
-		gvt_vgpu_err("Unsupport registers %x\n", offset);
+		gvt_vgpu_err("Unsupported registers %x\n", offset);
 		return -EINVAL;
 	}
 
@@ -3052,7 +3052,7 @@ int intel_vgpu_default_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
 }
 
 /**
- * intel_t_default_mmio_write - default MMIO write handler
+ * intel_vgpu_default_mmio_write() - default MMIO write handler
  * @vgpu: a vGPU
  * @offset: access offset
  * @p_data: write data buffer
@@ -546,7 +546,7 @@ static void switch_mmio(struct intel_vgpu *pre,
 }
 
 /**
- * intel_gvt_switch_render_mmio - switch mmio context of specific engine
+ * intel_gvt_switch_mmio - switch mmio context of specific engine
  * @pre: the last vGPU that own the engine
 * @next: the vGPU to switch to
 * @engine: the engine
@@ -1076,7 +1076,8 @@ static int iterate_skl_plus_mmio(struct intel_gvt_mmio_table_iter *iter)
 	MMIO_D(GEN8_HDC_CHICKEN1);
 	MMIO_D(GEN9_WM_CHICKEN3);
 
-	if (IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv))
+	if (IS_KABYLAKE(dev_priv) ||
+	    IS_COFFEELAKE(dev_priv) || IS_COMETLAKE(dev_priv))
 		MMIO_D(GAMT_CHKN_BIT_REG);
 	if (!IS_BROXTON(dev_priv))
 		MMIO_D(GEN9_CTX_PREEMPT_REG);
@@ -6561,7 +6561,10 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
 		enum plane_id plane_id;
 		u8 slices;
 
-		skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
+		memset(&crtc_state->wm.skl.optimal, 0,
+		       sizeof(crtc_state->wm.skl.optimal));
+		if (crtc_state->hw.active)
+			skl_pipe_wm_get_hw_state(crtc, &crtc_state->wm.skl.optimal);
 		crtc_state->wm.skl.raw = crtc_state->wm.skl.optimal;
 
 		memset(&dbuf_state->ddb[pipe], 0, sizeof(dbuf_state->ddb[pipe]));
@@ -6572,6 +6575,9 @@ void skl_wm_get_hw_state(struct drm_i915_private *dev_priv)
 			struct skl_ddb_entry *ddb_y =
 				&crtc_state->wm.skl.plane_ddb_y[plane_id];
 
+			if (!crtc_state->hw.active)
+				continue;
+
 			skl_ddb_get_hw_plane_state(dev_priv, crtc->pipe,
 						   plane_id, ddb, ddb_y);
 
@@ -2061,6 +2061,12 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc)
 
 	intf_cfg.stream_sel = 0; /* Don't care value for video mode */
 	intf_cfg.mode_3d = dpu_encoder_helper_get_3d_blend_mode(phys_enc);
+
+	if (phys_enc->hw_intf)
+		intf_cfg.intf = phys_enc->hw_intf->idx;
+	if (phys_enc->hw_wb)
+		intf_cfg.wb = phys_enc->hw_wb->idx;
+
 	if (phys_enc->hw_pp->merge_3d)
 		intf_cfg.merge_3d = phys_enc->hw_pp->merge_3d->idx;
 
@@ -1214,7 +1214,7 @@ static int dp_ctrl_link_train_2(struct dp_ctrl_private *ctrl,
 	if (ret)
 		return ret;
 
-	dp_ctrl_train_pattern_set(ctrl, pattern | DP_RECOVERED_CLOCK_OUT_EN);
+	dp_ctrl_train_pattern_set(ctrl, pattern);
 
 	for (tries = 0; tries <= maximum_retries; tries++) {
 		drm_dp_link_train_channel_eq_delay(ctrl->aux, ctrl->panel->dpcd);
@@ -109,7 +109,7 @@ static const char * const dsi_8996_bus_clk_names[] = {
 static const struct msm_dsi_config msm8996_dsi_cfg = {
 	.io_offset = DSI_6G_REG_SHIFT,
 	.reg_cfg = {
-		.num = 2,
+		.num = 3,
 		.regs = {
 			{"vdda", 18160, 1 },	/* 1.25 V */
 			{"vcca", 17000, 32 },	/* 0.925 V */
@@ -148,7 +148,7 @@ static const char * const dsi_sdm660_bus_clk_names[] = {
 static const struct msm_dsi_config sdm660_dsi_cfg = {
 	.io_offset = DSI_6G_REG_SHIFT,
 	.reg_cfg = {
-		.num = 2,
+		.num = 1,
 		.regs = {
 			{"vdda", 12560, 4 },	/* 1.2 V */
 		},
@@ -347,7 +347,7 @@ int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing *timing,
 	} else {
 		timing->shared_timings.clk_pre =
 			linear_inter(tmax, tmin, pcnt2, 0, false);
-			timing->shared_timings.clk_pre_inc_by_2 = 0;
+		timing->shared_timings.clk_pre_inc_by_2 = 0;
 	}
 
 	timing->ta_go = 3;
@@ -469,6 +469,8 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
 		}
 	}
 
+	drm_helper_move_panel_connectors_to_head(ddev);
+
 	ddev->mode_config.funcs = &mode_config_funcs;
 	ddev->mode_config.helper_private = &mode_config_helper_funcs;
 
@@ -213,6 +213,8 @@ void msm_devfreq_init(struct msm_gpu *gpu)
 
 	if (IS_ERR(df->devfreq)) {
 		DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize GPU devfreq\n");
+		dev_pm_qos_remove_request(&df->idle_freq);
+		dev_pm_qos_remove_request(&df->boost_freq);
 		df->devfreq = NULL;
 		return;
 	}
@@ -196,6 +196,9 @@ static int rd_open(struct inode *inode, struct file *file)
 	file->private_data = rd;
 	rd->open = true;
 
+	/* Reset fifo to clear any previously unread data: */
+	rd->fifo.head = rd->fifo.tail = 0;
+
 	/* the parsing tools need to know gpu-id to know which
 	 * register database to load.
 	 *
@@ -288,11 +288,29 @@ int amd_sfh_irq_init(struct amd_mp2_dev *privdata)
 	return 0;
 }
 
+static const struct dmi_system_id dmi_nodevs[] = {
+	{
+		/*
+		 * Google Chromebooks use Chrome OS Embedded Controller Sensor
+		 * Hub instead of Sensor Hub Fusion and leaves MP2
+		 * uninitialized, which disables all functionalities, even
+		 * including the registers necessary for feature detections.
+		 */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Google"),
+		},
+	},
+	{ }
+};
+
 static int amd_mp2_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct amd_mp2_dev *privdata;
 	int rc;
 
+	if (dmi_first_match(dmi_nodevs))
+		return -ENODEV;
+
 	privdata = devm_kzalloc(&pdev->dev, sizeof(*privdata), GFP_KERNEL);
 	if (!privdata)
 		return -ENOMEM;
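For readers unfamiliar with the pattern above: dmi_first_match() scans a NULL-terminated array of struct dmi_system_id and returns the first entry whose DMI_MATCH() fields all match the platform's DMI strings, which lets a PCI probe bail out early on known-bad systems. A minimal sketch with a hypothetical vendor string:

    static const struct dmi_system_id deny_list[] = {
            {
                    .matches = {
                            /* hypothetical vendor, for illustration only */
                            DMI_MATCH(DMI_SYS_VENDOR, "ExampleVendor"),
                    },
            },
            { }     /* empty terminator entry is required */
    };

    /* in probe(): */
    if (dmi_first_match(deny_list))
            return -ENODEV;     /* device is not usable on this machine */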
@@ -1212,6 +1212,13 @@ static __u8 *asus_report_fixup(struct hid_device *hdev, __u8 *rdesc,
 		rdesc = new_rdesc;
 	}
 
+	if (drvdata->quirks & QUIRK_ROG_NKEY_KEYBOARD &&
+	    *rsize == 331 && rdesc[190] == 0x85 && rdesc[191] == 0x5a &&
+	    rdesc[204] == 0x95 && rdesc[205] == 0x05) {
+		hid_info(hdev, "Fixing up Asus N-KEY keyb report descriptor\n");
+		rdesc[205] = 0x01;
+	}
+
 	return rdesc;
 }
 
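Decoding the magic numbers above, in HID report-descriptor terms: 0x85 0x5a declares Report ID 0x5a and 0x95 0x05 declares Report Count (5); overwriting rdesc[205] with 0x01 therefore rewrites it to Report Count (1) before the descriptor is parsed. The same byte-patch idea, as a sketch (offsets hypothetical, not tied to this device):

    /* ... 0x85, 0x5a, ... 0x95, 0x05, ... in the original descriptor */
    if (rdesc[204] == 0x95 && rdesc[205] == 0x05)   /* Report Count (5) */
            rdesc[205] = 0x01;                      /* force Report Count (1) */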
@@ -185,6 +185,8 @@
 #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021   0x029c
 #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021   0x029a
 #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2021   0x029f
+#define USB_DEVICE_ID_APPLE_TOUCHBAR_BACKLIGHT 0x8102
+#define USB_DEVICE_ID_APPLE_TOUCHBAR_DISPLAY 0x8302
 
 #define USB_VENDOR_ID_ASUS		0x0486
 #define USB_DEVICE_ID_ASUS_T91MT	0x0185
@@ -414,6 +416,7 @@
 #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN	0x2706
 #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN	0x261A
 #define I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN	0x2A1C
+#define I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN	0x279F
 
 #define USB_VENDOR_ID_ELECOM		0x056e
 #define USB_DEVICE_ID_ELECOM_BM084	0x0061
@@ -383,6 +383,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
 	  HID_BATTERY_QUIRK_IGNORE },
 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN),
 	  HID_BATTERY_QUIRK_IGNORE },
+	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN),
+	  HID_BATTERY_QUIRK_IGNORE },
 	{}
 };
 
@@ -1532,7 +1534,10 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct
 			 * assume ours
 			 */
 			if (!report->tool)
-				report->tool = usage->code;
+				hid_report_set_tool(report, input, usage->code);
+
+			/* drivers may have changed the value behind our back, resend it */
+			hid_report_set_tool(report, input, report->tool);
 		} else {
 			hid_report_release_tool(report, input, usage->code);
 		}
@@ -1221,6 +1221,7 @@ static void joycon_parse_report(struct joycon_ctlr *ctlr,
 
 	spin_lock_irqsave(&ctlr->lock, flags);
 	if (IS_ENABLED(CONFIG_NINTENDO_FF) && rep->vibrator_report &&
+	    ctlr->ctlr_state != JOYCON_CTLR_STATE_REMOVED &&
 	    (msecs - ctlr->rumble_msecs) >= JC_RUMBLE_PERIOD_MS &&
 	    (ctlr->rumble_queue_head != ctlr->rumble_queue_tail ||
 	     ctlr->rumble_zero_countdown > 0)) {
@@ -1545,12 +1546,13 @@ static int joycon_set_rumble(struct joycon_ctlr *ctlr, u16 amp_r, u16 amp_l,
 		ctlr->rumble_queue_head = 0;
 	memcpy(ctlr->rumble_data[ctlr->rumble_queue_head], data,
 	       JC_RUMBLE_DATA_SIZE);
-	spin_unlock_irqrestore(&ctlr->lock, flags);
 
 	/* don't wait for the periodic send (reduces latency) */
-	if (schedule_now)
+	if (schedule_now && ctlr->ctlr_state != JOYCON_CTLR_STATE_REMOVED)
 		queue_work(ctlr->rumble_queue, &ctlr->rumble_worker);
 
+	spin_unlock_irqrestore(&ctlr->lock, flags);
+
 	return 0;
 }
 
@@ -314,6 +314,8 @@ static const struct hid_device_id hid_have_special_driver[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021) },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021) },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_TOUCHBAR_BACKLIGHT) },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_TOUCHBAR_DISPLAY) },
 #endif
 #if IS_ENABLED(CONFIG_HID_APPLEIR)
 	{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_IRCONTROL) },
@@ -134,6 +134,11 @@ static int steam_recv_report(struct steam_device *steam,
 	int ret;
 
 	r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0];
+	if (!r) {
+		hid_err(steam->hdev, "No HID_FEATURE_REPORT submitted - nothing to read\n");
+		return -EINVAL;
+	}
+
 	if (hid_report_len(r) < 64)
 		return -EINVAL;
 
@@ -165,6 +170,11 @@ static int steam_send_report(struct steam_device *steam,
 	int ret;
 
 	r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0];
+	if (!r) {
+		hid_err(steam->hdev, "No HID_FEATURE_REPORT submitted - nothing to read\n");
+		return -EINVAL;
+	}
+
 	if (hid_report_len(r) < 64)
 		return -EINVAL;
 
@@ -67,12 +67,13 @@ static const struct tm_wheel_info tm_wheels_infos[] = {
 	{0x0200, 0x0005, "Thrustmaster T300RS (Missing Attachment)"},
 	{0x0206, 0x0005, "Thrustmaster T300RS"},
 	{0x0209, 0x0005, "Thrustmaster T300RS (Open Wheel Attachment)"},
+	{0x020a, 0x0005, "Thrustmaster T300RS (Sparco R383 Mod)"},
 	{0x0204, 0x0005, "Thrustmaster T300 Ferrari Alcantara Edition"},
 	{0x0002, 0x0002, "Thrustmaster T500RS"}
 	//{0x0407, 0x0001, "Thrustmaster TMX"}
 };
 
-static const uint8_t tm_wheels_infos_length = 4;
+static const uint8_t tm_wheels_infos_length = 7;
 
 /*
  * This structs contains (in little endian) the response data
@@ -350,6 +350,8 @@ static int hidraw_release(struct inode * inode, struct file * file)
 	down_write(&minors_rwsem);
 
 	spin_lock_irqsave(&hidraw_table[minor]->list_lock, flags);
+	for (int i = list->tail; i < list->head; i++)
+		kfree(list->buffer[i].value);
 	list_del(&list->node);
 	spin_unlock_irqrestore(&hidraw_table[minor]->list_lock, flags);
 	kfree(list);
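Context for the hidraw hunk: each entry queued on a reader's ring buffer owns a kmalloc()'d report copy, and before this change any entries still unread at release time were simply leaked. Note the drain loop above assumes tail <= head; a wrap-tolerant variant would mask the index, roughly like this sketch (the masking with HIDRAW_BUFFER_SIZE is an assumption here, not part of this diff):

    /* drain a power-of-two ring, tolerating index wraparound */
    for (int i = list->tail; i != list->head;
         i = (i + 1) & (HIDRAW_BUFFER_SIZE - 1))
            kfree(list->buffer[i].value);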
@@ -32,6 +32,7 @@
 #define ADL_P_DEVICE_ID		0x51FC
 #define ADL_N_DEVICE_ID		0x54FC
 #define RPL_S_DEVICE_ID		0x7A78
+#define MTL_P_DEVICE_ID		0x7E45
 
 #define REVISION_ID_CHT_A0	0x6
 #define REVISION_ID_CHT_Ax_SI	0x0
@@ -43,6 +43,7 @@ static const struct pci_device_id ish_pci_tbl[] = {
 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, ADL_P_DEVICE_ID)},
 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, ADL_N_DEVICE_ID)},
 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, RPL_S_DEVICE_ID)},
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, MTL_P_DEVICE_ID)},
 	{0, }
 };
 MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
@@ -105,7 +105,7 @@ struct report_list {
 * @multi_packet_cnt:	Count of fragmented packet count
 *
 * This structure is used to store completion flags and per client data like
-* like report description, number of HID devices etc.
+* report description, number of HID devices etc.
 */
 struct ishtp_cl_data {
 	/* completion flags */
@@ -626,13 +626,14 @@ static void ishtp_cl_read_complete(struct ishtp_cl_rb *rb)
 }
 
 /**
- * ipc_tx_callback() - IPC tx callback function
+ * ipc_tx_send() - IPC tx send function
  * @prm: Pointer to client device instance
  *
- * Send message over IPC either first time or on callback on previous message
- * completion
+ * Send message over IPC. Message will be split into fragments
+ * if message size is bigger than IPC FIFO size, and all
+ * fragments will be sent one by one.
 */
-static void ipc_tx_callback(void *prm)
+static void ipc_tx_send(void *prm)
 {
 	struct ishtp_cl	*cl = prm;
 	struct ishtp_cl_tx_ring	*cl_msg;
@@ -677,32 +678,41 @@ static void ipc_tx_callback(void *prm)
 			    list);
 	rem = cl_msg->send_buf.size - cl->tx_offs;
 
-	ishtp_hdr.host_addr = cl->host_client_id;
-	ishtp_hdr.fw_addr = cl->fw_client_id;
-	ishtp_hdr.reserved = 0;
-	pmsg = cl_msg->send_buf.data + cl->tx_offs;
+	while (rem > 0) {
+		ishtp_hdr.host_addr = cl->host_client_id;
+		ishtp_hdr.fw_addr = cl->fw_client_id;
+		ishtp_hdr.reserved = 0;
+		pmsg = cl_msg->send_buf.data + cl->tx_offs;
 
-	if (rem <= dev->mtu) {
-		ishtp_hdr.length = rem;
-		ishtp_hdr.msg_complete = 1;
-		cl->sending = 0;
-		list_del_init(&cl_msg->list);	/* Must be before write */
-		spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
-		/* Submit to IPC queue with no callback */
-		ishtp_write_message(dev, &ishtp_hdr, pmsg);
-		spin_lock_irqsave(&cl->tx_free_list_spinlock, tx_free_flags);
-		list_add_tail(&cl_msg->list, &cl->tx_free_list.list);
-		++cl->tx_ring_free_size;
-		spin_unlock_irqrestore(&cl->tx_free_list_spinlock,
-				       tx_free_flags);
-	} else {
-		/* Send IPC fragment */
-		spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
-		cl->tx_offs += dev->mtu;
-		ishtp_hdr.length = dev->mtu;
-		ishtp_hdr.msg_complete = 0;
-		ishtp_send_msg(dev, &ishtp_hdr, pmsg, ipc_tx_callback, cl);
+		if (rem <= dev->mtu) {
+			/* Last fragment or only one packet */
+			ishtp_hdr.length = rem;
+			ishtp_hdr.msg_complete = 1;
+			/* Submit to IPC queue with no callback */
+			ishtp_write_message(dev, &ishtp_hdr, pmsg);
+			cl->tx_offs = 0;
+			cl->sending = 0;
+
+			break;
+		} else {
+			/* Send ipc fragment */
+			ishtp_hdr.length = dev->mtu;
+			ishtp_hdr.msg_complete = 0;
+			/* All fregments submitted to IPC queue with no callback */
+			ishtp_write_message(dev, &ishtp_hdr, pmsg);
+			cl->tx_offs += dev->mtu;
+			rem = cl_msg->send_buf.size - cl->tx_offs;
+		}
 	}
+
+	list_del_init(&cl_msg->list);
+	spin_unlock_irqrestore(&cl->tx_list_spinlock, tx_flags);
+
+	spin_lock_irqsave(&cl->tx_free_list_spinlock, tx_free_flags);
+	list_add_tail(&cl_msg->list, &cl->tx_free_list.list);
+	++cl->tx_ring_free_size;
+	spin_unlock_irqrestore(&cl->tx_free_list_spinlock,
+			       tx_free_flags);
 }
 
 /**
@@ -720,7 +730,7 @@ static void ishtp_cl_send_msg_ipc(struct ishtp_device *dev,
 		return;
 
 	cl->tx_offs = 0;
-	ipc_tx_callback(cl);
+	ipc_tx_send(cl);
 	++cl->send_msg_cnt_ipc;
 }
 
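The rename above reflects a behavioral change, not just new wording: rather than re-arming a per-fragment IPC callback, ipc_tx_send() now emits every fragment of a message in one loop, marking only the final fragment msg_complete, so fragments of two different messages can no longer interleave on the wire. The framing logic, reduced to a sketch with stand-in names (not the ISHTP API):

    /* Send buf in mtu-sized fragments; only the last is "complete". */
    static void send_all(const unsigned char *buf, size_t size, size_t mtu)
    {
            size_t off = 0;

            while (off < size) {
                    size_t len = size - off;
                    int complete = len <= mtu;      /* last fragment? */

                    if (!complete)
                            len = mtu;
                    emit_fragment(buf + off, len, complete); /* hypothetical helper */
                    off += len;
            }
    }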
@@ -287,10 +287,8 @@ static int ad7292_probe(struct spi_device *spi)
 
 		ret = devm_add_action_or_reset(&spi->dev,
 					       ad7292_regulator_disable, st);
-		if (ret) {
-			regulator_disable(st->reg);
+		if (ret)
 			return ret;
-		}
 
 		ret = regulator_get_voltage(st->reg);
 		if (ret < 0)
@@ -40,8 +40,8 @@
 #define MCP3911_CHANNEL(x)	(MCP3911_REG_CHANNEL0 + x * 3)
 #define MCP3911_OFFCAL(x)	(MCP3911_REG_OFFCAL_CH0 + x * 6)
 
-/* Internal voltage reference in uV */
-#define MCP3911_INT_VREF_UV	1200000
+/* Internal voltage reference in mV */
+#define MCP3911_INT_VREF_MV	1200
 
 #define MCP3911_REG_READ(reg, id)	((((reg) << 1) | ((id) << 5) | (1 << 0)) & 0xff)
 #define MCP3911_REG_WRITE(reg, id)	((((reg) << 1) | ((id) << 5) | (0 << 0)) & 0xff)
@@ -113,6 +113,8 @@ static int mcp3911_read_raw(struct iio_dev *indio_dev,
 		if (ret)
 			goto out;
 
+		*val = sign_extend32(*val, 23);
+
 		ret = IIO_VAL_INT;
 		break;
 
@@ -137,11 +139,18 @@ static int mcp3911_read_raw(struct iio_dev *indio_dev,
 
 			*val = ret / 1000;
 		} else {
-			*val = MCP3911_INT_VREF_UV;
+			*val = MCP3911_INT_VREF_MV;
 		}
 
-		*val2 = 24;
-		ret = IIO_VAL_FRACTIONAL_LOG2;
+		/*
+		 * For 24bit Conversion
+		 * Raw = ((Voltage)/(Vref) * 2^23 * Gain * 1.5
+		 * Voltage = Raw * (Vref)/(2^23 * Gain * 1.5)
+		 */
+
+		/* val2 = (2^23 * 1.5) */
+		*val2 = 12582912;
+		ret = IIO_VAL_FRACTIONAL;
 		break;
 	}
 
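The scale constants above check out: from Raw = V/Vref * 2^23 * Gain * 1.5 it follows that V = Raw * Vref / (2^23 * Gain * 1.5), and 2^23 * 1.5 = 8388608 * 1.5 = 12582912. With the 1200 mV internal reference and gain 1, IIO_VAL_FRACTIONAL therefore reports 1200 / 12582912, roughly 95.4 nV per LSB. A standalone arithmetic check:

    #include <stdio.h>

    int main(void)
    {
            const double vref_mv = 1200.0;          /* MCP3911_INT_VREF_MV */
            const double denom = 8388608.0 * 1.5;   /* 2^23 * 1.5 = 12582912 */

            printf("%.9f mV/LSB\n", vref_mv / denom);   /* ~0.000095367 */
            return 0;
    }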
@@ -208,7 +217,14 @@ static int mcp3911_config(struct mcp3911 *adc)
 	u32 configreg;
 	int ret;
 
-	device_property_read_u32(dev, "device-addr", &adc->dev_addr);
+	ret = device_property_read_u32(dev, "microchip,device-addr", &adc->dev_addr);
+
+	/*
+	 * Fallback to "device-addr" due to historical mismatch between
+	 * dt-bindings and implementation
+	 */
+	if (ret)
+		device_property_read_u32(dev, "device-addr", &adc->dev_addr);
 	if (adc->dev_addr > 3) {
 		dev_err(&adc->spi->dev,
 			"invalid device address (%i). Must be in range 0-3.\n",
@@ -505,7 +505,7 @@ static int cm32181_resume(struct device *dev)
 				      cm32181->conf_regs[CM32181_REG_ADDR_CMD]);
 }
 
-DEFINE_SIMPLE_DEV_PM_OPS(cm32181_pm_ops, cm32181_suspend, cm32181_resume);
+static DEFINE_SIMPLE_DEV_PM_OPS(cm32181_pm_ops, cm32181_suspend, cm32181_resume);
 
 static const struct of_device_id cm32181_of_match[] = {
 	{ .compatible = "capella,cm3218" },
@@ -226,8 +226,10 @@ static int cm3605_probe(struct platform_device *pdev)
 	}
 
 	irq = platform_get_irq(pdev, 0);
-	if (irq < 0)
-		return dev_err_probe(dev, irq, "failed to get irq\n");
+	if (irq < 0) {
+		ret = dev_err_probe(dev, irq, "failed to get irq\n");
+		goto out_disable_aset;
+	}
 
 	ret = devm_request_threaded_irq(dev, irq, cm3605_prox_irq,
 					NULL, 0, "cm3605", indio_dev);
@@ -40,7 +40,7 @@ static int of_iommu_xlate(struct device *dev,
 	 * a proper probe-ordering dependency mechanism in future.
 	 */
 	if (!ops)
-		return -ENODEV;
+		return driver_deferred_probe_check_state(dev);
 
 	if (!try_module_get(ops->owner))
 		return -ENODEV;
@@ -1416,42 +1416,37 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
 {
 	int ret;
 	struct device *dev = ir->dev;
-	char *data;
-
-	data = kzalloc(USB_CTRL_MSG_SZ, GFP_KERNEL);
-	if (!data) {
-		dev_err(dev, "%s: memory allocation failed!", __func__);
-		return;
-	}
+	char data[USB_CTRL_MSG_SZ];
 
 	/*
 	 * This is a strange one. Windows issues a set address to the device
 	 * on the receive control pipe and expect a certain value pair back
 	 */
-	ret = usb_control_msg(ir->usbdev, usb_rcvctrlpipe(ir->usbdev, 0),
-			      USB_REQ_SET_ADDRESS, USB_TYPE_VENDOR, 0, 0,
-			      data, USB_CTRL_MSG_SZ, 3000);
+	ret = usb_control_msg_recv(ir->usbdev, 0, USB_REQ_SET_ADDRESS,
+				   USB_DIR_IN | USB_TYPE_VENDOR,
+				   0, 0, data, USB_CTRL_MSG_SZ, 3000,
+				   GFP_KERNEL);
 	dev_dbg(dev, "set address - ret = %d", ret);
 	dev_dbg(dev, "set address - data[0] = %d, data[1] = %d",
 						data[0], data[1]);
 
 	/* set feature: bit rate 38400 bps */
-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-			      USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
-			      0xc04e, 0x0000, NULL, 0, 3000);
+	ret = usb_control_msg_send(ir->usbdev, 0,
+				   USB_REQ_SET_FEATURE, USB_TYPE_VENDOR,
+				   0xc04e, 0x0000, NULL, 0, 3000, GFP_KERNEL);
 
 	dev_dbg(dev, "set feature - ret = %d", ret);
 
 	/* bRequest 4: set char length to 8 bits */
-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-			      4, USB_TYPE_VENDOR,
-			      0x0808, 0x0000, NULL, 0, 3000);
+	ret = usb_control_msg_send(ir->usbdev, 0,
+				   4, USB_TYPE_VENDOR,
+				   0x0808, 0x0000, NULL, 0, 3000, GFP_KERNEL);
 	dev_dbg(dev, "set char length - retB = %d", ret);
 
 	/* bRequest 2: set handshaking to use DTR/DSR */
-	ret = usb_control_msg(ir->usbdev, usb_sndctrlpipe(ir->usbdev, 0),
-			      2, USB_TYPE_VENDOR,
-			      0x0000, 0x0100, NULL, 0, 3000);
+	ret = usb_control_msg_send(ir->usbdev, 0,
+				   2, USB_TYPE_VENDOR,
+				   0x0000, 0x0100, NULL, 0, 3000, GFP_KERNEL);
 	dev_dbg(dev, "set handshake - retC = %d", ret);
 
 	/* device resume */
@@ -1459,8 +1454,6 @@ static void mceusb_gen1_init(struct mceusb_dev *ir)
 
 	/* get hw/sw revision? */
 	mce_command_out(ir, GET_REVISION, sizeof(GET_REVISION));
-
-	kfree(data);
 }
 
 static void mceusb_gen2_init(struct mceusb_dev *ir)
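What makes the conversion above safe: usb_control_msg_recv() and usb_control_msg_send() differ from raw usb_control_msg() in that they bounce data through an internal heap buffer (so on-stack buffers are fine, hence the dropped kzalloc()/kfree() pair), return 0 or a negative errno instead of a byte count, and treat short transfers as errors. A hedged usage sketch:

    /* read a vendor response into a stack buffer; udev is a usb_device */
    char data[USB_CTRL_MSG_SZ];
    int ret;

    ret = usb_control_msg_recv(udev, 0, USB_REQ_SET_ADDRESS,
                               USB_DIR_IN | USB_TYPE_VENDOR, 0, 0,
                               data, sizeof(data), 3000, GFP_KERNEL);
    if (ret)
            return ret;     /* 0 on full transfer, negative errno otherwise */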
@@ -25,7 +25,7 @@
 #define SDSP_DOMAIN_ID (2)
 #define CDSP_DOMAIN_ID (3)
 #define FASTRPC_DEV_MAX		4 /* adsp, mdsp, slpi, cdsp*/
-#define FASTRPC_MAX_SESSIONS	13 /*12 compute, 1 cpz*/
+#define FASTRPC_MAX_SESSIONS	14
 #define FASTRPC_MAX_VMIDS	16
 #define FASTRPC_ALIGN		128
 #define FASTRPC_MAX_FDLIST	16
@@ -1943,7 +1943,12 @@ static int fastrpc_cb_probe(struct platform_device *pdev)
 	of_property_read_u32(dev->of_node, "qcom,nsessions", &sessions);
 
 	spin_lock_irqsave(&cctx->lock, flags);
-	sess = &cctx->session[cctx->sesscount];
+	if (cctx->sesscount >= FASTRPC_MAX_SESSIONS) {
+		dev_err(&pdev->dev, "too many sessions\n");
+		spin_unlock_irqrestore(&cctx->lock, flags);
+		return -ENOSPC;
+	}
+	sess = &cctx->session[cctx->sesscount++];
 	sess->used = false;
 	sess->valid = true;
 	sess->dev = dev;
@@ -1956,13 +1961,12 @@ static int fastrpc_cb_probe(struct platform_device *pdev)
 		struct fastrpc_session_ctx *dup_sess;
 
 		for (i = 1; i < sessions; i++) {
-			if (cctx->sesscount++ >= FASTRPC_MAX_SESSIONS)
+			if (cctx->sesscount >= FASTRPC_MAX_SESSIONS)
 				break;
-			dup_sess = &cctx->session[cctx->sesscount];
+			dup_sess = &cctx->session[cctx->sesscount++];
 			memcpy(dup_sess, sess, sizeof(*dup_sess));
 		}
 	}
-	cctx->sesscount++;
 	spin_unlock_irqrestore(&cctx->lock, flags);
 	rc = dma_set_mask(dev, DMA_BIT_MASK(32));
 	if (rc) {
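The session-count rework above is the classic bounds-check-then-claim pattern: the old code indexed session[] before testing FASTRPC_MAX_SESSIONS, and the duplicate loop post-incremented inside its bounds test, so oversized qcom,nsessions values could step past the array. The safe shape, reduced to a generic sketch (names hypothetical):

    /* check capacity first, claim the slot only afterwards */
    if (count >= MAX_SLOTS) {
            /* unlock/cleanup as needed */
            return -ENOSPC;
    }
    slot = &table[count++];     /* count < MAX_SLOTS guaranteed here */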
@@ -949,15 +949,16 @@ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
 
 		/* Erase init depends on CSD and SSR */
 		mmc_init_erase(card);
-
-		/*
-		 * Fetch switch information from card.
-		 */
-		err = mmc_read_switch(card);
-		if (err)
-			return err;
 	}
 
+	/*
+	 * Fetch switch information from card. Note, sd3_bus_mode can change if
+	 * voltage switch outcome changes, so do this always.
+	 */
+	err = mmc_read_switch(card);
+	if (err)
+		return err;
+
 	/*
 	 * For SPI, enable CRC as appropriate.
 	 * This CRC enable is located AFTER the reading of the
@@ -1480,26 +1481,15 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
 	if (!v18_fixup_failed && !mmc_host_is_spi(host) && mmc_host_uhs(host) &&
+	    mmc_sd_card_using_v18(card) &&
 	    host->ios.signal_voltage != MMC_SIGNAL_VOLTAGE_180) {
-		/*
-		 * Re-read switch information in case it has changed since
-		 * oldcard was initialized.
-		 */
-		if (oldcard) {
-			err = mmc_read_switch(card);
-			if (err)
-				goto free_card;
-		}
-		if (mmc_sd_card_using_v18(card)) {
-			if (mmc_host_set_uhs_voltage(host) ||
-			    mmc_sd_init_uhs_card(card)) {
-				v18_fixup_failed = true;
-				mmc_power_cycle(host, ocr);
-				if (!oldcard)
-					mmc_remove_card(card);
-				goto retry;
-			}
-			goto done;
+		if (mmc_host_set_uhs_voltage(host) ||
+		    mmc_sd_init_uhs_card(card)) {
+			v18_fixup_failed = true;
+			mmc_power_cycle(host, ocr);
+			if (!oldcard)
+				mmc_remove_card(card);
+			goto retry;
 		}
+		goto cont;
 	}
 
 	/* Initialization sequence for UHS-I cards */
@@ -1534,7 +1524,7 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
 			mmc_set_bus_width(host, MMC_BUS_WIDTH_4);
 		}
 	}
-
+cont:
 	if (!oldcard) {
 		/* Read/parse the extension registers. */
 		err = sd_read_ext_regs(card);
@@ -1566,7 +1556,7 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
 		err = -EINVAL;
 		goto free_card;
 	}
-done:
+
 	host->card = card;
 	return 0;
 
@@ -109,6 +109,7 @@ static void xrs700x_read_port_counters(struct xrs700x *priv, int port)
 {
 	struct xrs700x_port *p = &priv->ports[port];
 	struct rtnl_link_stats64 stats;
+	unsigned long flags;
 	int i;
 
 	memset(&stats, 0, sizeof(stats));
@@ -138,9 +139,9 @@ static void xrs700x_read_port_counters(struct xrs700x *priv, int port)
 	 */
 	stats.rx_packets += stats.multicast;
 
-	u64_stats_update_begin(&p->syncp);
+	flags = u64_stats_update_begin_irqsave(&p->syncp);
 	p->stats64 = stats;
-	u64_stats_update_end(&p->syncp);
+	u64_stats_update_end_irqrestore(&p->syncp, flags);
 
 	mutex_unlock(&p->mib_mutex);
 }
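This hunk and the networking hunks that follow are one mechanical conversion: on 32-bit SMP kernels, u64_stats_sync is a real seqcount, and a writer that can run in IRQ context must use the _irqsave variants while readers use the _irq fetch helpers; otherwise a reader interrupted mid-update can see torn 64-bit counters or spin forever. The canonical pairing, roughly as used in these drivers:

    /* writer side (may be reached from IRQ context) */
    unsigned long flags;

    flags = u64_stats_update_begin_irqsave(&p->syncp);
    p->stats64 = stats;                 /* publish the 64-bit counters */
    u64_stats_update_end_irqrestore(&p->syncp, flags);

    /* reader side: retry until a consistent snapshot is observed */
    unsigned int start;

    do {
            start = u64_stats_fetch_begin_irq(&p->syncp);
            copy = p->stats64;
    } while (u64_stats_fetch_retry_irq(&p->syncp, start));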
@@ -18076,16 +18076,20 @@ static void tg3_shutdown(struct pci_dev *pdev)
 	struct net_device *dev = pci_get_drvdata(pdev);
 	struct tg3 *tp = netdev_priv(dev);
 
+	tg3_reset_task_cancel(tp);
+
 	rtnl_lock();
+
 	netif_device_detach(dev);
 
 	if (netif_running(dev))
 		dev_close(dev);
 
-	if (system_state == SYSTEM_POWER_OFF)
-		tg3_power_down(tp);
+	tg3_power_down(tp);
 
 	rtnl_unlock();
+
+	pci_disable_device(pdev);
 }
 
 /**
@@ -1919,7 +1919,7 @@ static void gmac_get_stats64(struct net_device *netdev,
 
 	/* Racing with RX NAPI */
 	do {
-		start = u64_stats_fetch_begin(&port->rx_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->rx_stats_syncp);
 
 		stats->rx_packets = port->stats.rx_packets;
 		stats->rx_bytes = port->stats.rx_bytes;
@@ -1931,11 +1931,11 @@ static void gmac_get_stats64(struct net_device *netdev,
 		stats->rx_crc_errors = port->stats.rx_crc_errors;
 		stats->rx_frame_errors = port->stats.rx_frame_errors;
 
-	} while (u64_stats_fetch_retry(&port->rx_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->rx_stats_syncp, start));
 
 	/* Racing with MIB and TX completion interrupts */
 	do {
-		start = u64_stats_fetch_begin(&port->ir_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->ir_stats_syncp);
 
 		stats->tx_errors = port->stats.tx_errors;
 		stats->tx_packets = port->stats.tx_packets;
@@ -1945,15 +1945,15 @@ static void gmac_get_stats64(struct net_device *netdev,
 		stats->rx_missed_errors = port->stats.rx_missed_errors;
 		stats->rx_fifo_errors = port->stats.rx_fifo_errors;
 
-	} while (u64_stats_fetch_retry(&port->ir_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->ir_stats_syncp, start));
 
 	/* Racing with hard_start_xmit */
 	do {
-		start = u64_stats_fetch_begin(&port->tx_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->tx_stats_syncp);
 
 		stats->tx_dropped = port->stats.tx_dropped;
 
-	} while (u64_stats_fetch_retry(&port->tx_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->tx_stats_syncp, start));
 
 	stats->rx_dropped += stats->rx_missed_errors;
 }
@@ -2031,18 +2031,18 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
 	/* Racing with MIB interrupt */
 	do {
 		p = values;
-		start = u64_stats_fetch_begin(&port->ir_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->ir_stats_syncp);
 
 		for (i = 0; i < RX_STATS_NUM; i++)
 			*p++ = port->hw_stats[i];
 
-	} while (u64_stats_fetch_retry(&port->ir_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->ir_stats_syncp, start));
 	values = p;
 
 	/* Racing with RX NAPI */
 	do {
 		p = values;
-		start = u64_stats_fetch_begin(&port->rx_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->rx_stats_syncp);
 
 		for (i = 0; i < RX_STATUS_NUM; i++)
 			*p++ = port->rx_stats[i];
@@ -2050,13 +2050,13 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
 			*p++ = port->rx_csum_stats[i];
 		*p++ = port->rx_napi_exits;
 
-	} while (u64_stats_fetch_retry(&port->rx_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->rx_stats_syncp, start));
 	values = p;
 
 	/* Racing with TX start_xmit */
 	do {
 		p = values;
-		start = u64_stats_fetch_begin(&port->tx_stats_syncp);
+		start = u64_stats_fetch_begin_irq(&port->tx_stats_syncp);
 
 		for (i = 0; i < TX_MAX_FRAGS; i++) {
 			*values++ = port->tx_frag_stats[i];
@@ -2065,7 +2065,7 @@ static void gmac_get_ethtool_stats(struct net_device *netdev,
 		*values++ = port->tx_frags_linearized;
 		*values++ = port->tx_hw_csummed;
 
-	} while (u64_stats_fetch_retry(&port->tx_stats_syncp, start));
+	} while (u64_stats_fetch_retry_irq(&port->tx_stats_syncp, start));
 }
 
 static int gmac_get_ksettings(struct net_device *netdev,
@@ -206,9 +206,9 @@ struct funeth_rxq {
 
 #define FUN_QSTAT_READ(q, seq, stats_copy) \
 	do { \
-		seq = u64_stats_fetch_begin(&(q)->syncp); \
+		seq = u64_stats_fetch_begin_irq(&(q)->syncp); \
 		stats_copy = (q)->stats; \
-	} while (u64_stats_fetch_retry(&(q)->syncp, (seq)))
+	} while (u64_stats_fetch_retry_irq(&(q)->syncp, (seq)))
 
 #define FUN_INT_NAME_LEN (IFNAMSIZ + 16)
 
@@ -177,14 +177,14 @@ gve_get_ethtool_stats(struct net_device *netdev,
 				struct gve_rx_ring *rx = &priv->rx[ring];
 
 				start =
-				  u64_stats_fetch_begin(&priv->rx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
 				tmp_rx_pkts = rx->rpackets;
 				tmp_rx_bytes = rx->rbytes;
 				tmp_rx_skb_alloc_fail = rx->rx_skb_alloc_fail;
 				tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
 				tmp_rx_desc_err_dropped_pkt =
 					rx->rx_desc_err_dropped_pkt;
-			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
 						       start));
 			rx_pkts += tmp_rx_pkts;
 			rx_bytes += tmp_rx_bytes;
@@ -198,10 +198,10 @@ gve_get_ethtool_stats(struct net_device *netdev,
 		if (priv->tx) {
 			do {
 				start =
-				  u64_stats_fetch_begin(&priv->tx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
 				tmp_tx_pkts = priv->tx[ring].pkt_done;
 				tmp_tx_bytes = priv->tx[ring].bytes_done;
-			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
 						       start));
 			tx_pkts += tmp_tx_pkts;
 			tx_bytes += tmp_tx_bytes;
@@ -259,13 +259,13 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			data[i++] = rx->fill_cnt - rx->cnt;
 			do {
 				start =
-				  u64_stats_fetch_begin(&priv->rx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
 				tmp_rx_bytes = rx->rbytes;
 				tmp_rx_skb_alloc_fail = rx->rx_skb_alloc_fail;
 				tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
 				tmp_rx_desc_err_dropped_pkt =
 					rx->rx_desc_err_dropped_pkt;
-			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
 						       start));
 			data[i++] = tmp_rx_bytes;
 			data[i++] = rx->rx_cont_packet_cnt;
@@ -331,9 +331,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			}
 			do {
 				start =
-				  u64_stats_fetch_begin(&priv->tx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
 				tmp_tx_bytes = tx->bytes_done;
-			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
 						       start));
 			data[i++] = tmp_tx_bytes;
 			data[i++] = tx->wake_queue;
@@ -51,10 +51,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
 		for (ring = 0; ring < priv->rx_cfg.num_queues; ring++) {
 			do {
 				start =
-				  u64_stats_fetch_begin(&priv->rx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->rx[ring].statss);
 				packets = priv->rx[ring].rpackets;
 				bytes = priv->rx[ring].rbytes;
-			} while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->rx[ring].statss,
 						       start));
 			s->rx_packets += packets;
 			s->rx_bytes += bytes;
@@ -64,10 +64,10 @@ static void gve_get_stats(struct net_device *dev, struct rtnl_link_stats64 *s)
 		for (ring = 0; ring < priv->tx_cfg.num_queues; ring++) {
 			do {
 				start =
-				  u64_stats_fetch_begin(&priv->tx[ring].statss);
+				  u64_stats_fetch_begin_irq(&priv->tx[ring].statss);
 				packets = priv->tx[ring].pkt_done;
 				bytes = priv->tx[ring].bytes_done;
-			} while (u64_stats_fetch_retry(&priv->tx[ring].statss,
+			} while (u64_stats_fetch_retry_irq(&priv->tx[ring].statss,
 						       start));
 			s->tx_packets += packets;
 			s->tx_bytes += bytes;
@@ -1274,9 +1274,9 @@ void gve_handle_report_stats(struct gve_priv *priv)
 		}
 
 		do {
-			start = u64_stats_fetch_begin(&priv->tx[idx].statss);
+			start = u64_stats_fetch_begin_irq(&priv->tx[idx].statss);
 			tx_bytes = priv->tx[idx].bytes_done;
-		} while (u64_stats_fetch_retry(&priv->tx[idx].statss, start));
+		} while (u64_stats_fetch_retry_irq(&priv->tx[idx].statss, start));
 		stats[stats_idx++] = (struct stats) {
 			.stat_name = cpu_to_be32(TX_WAKE_CNT),
 			.value = cpu_to_be64(priv->tx[idx].wake_queue),
@@ -74,14 +74,14 @@ void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
 	unsigned int start;
 
 	do {
-		start = u64_stats_fetch_begin(&rxq_stats->syncp);
+		start = u64_stats_fetch_begin_irq(&rxq_stats->syncp);
 		stats->pkts = rxq_stats->pkts;
 		stats->bytes = rxq_stats->bytes;
 		stats->errors = rxq_stats->csum_errors +
 				rxq_stats->other_errors;
 		stats->csum_errors = rxq_stats->csum_errors;
 		stats->other_errors = rxq_stats->other_errors;
-	} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+	} while (u64_stats_fetch_retry_irq(&rxq_stats->syncp, start));
 }
 
 /**
@@ -99,14 +99,14 @@ void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
 	unsigned int start;
 
 	do {
-		start = u64_stats_fetch_begin(&txq_stats->syncp);
+		start = u64_stats_fetch_begin_irq(&txq_stats->syncp);
 		stats->pkts = txq_stats->pkts;
 		stats->bytes = txq_stats->bytes;
 		stats->tx_busy = txq_stats->tx_busy;
 		stats->tx_wake = txq_stats->tx_wake;
 		stats->tx_dropped = txq_stats->tx_dropped;
 		stats->big_frags_pkts = txq_stats->big_frags_pkts;
-	} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+	} while (u64_stats_fetch_retry_irq(&txq_stats->syncp, start));
 }
 
 /**