This is the 6.1.76 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmW64xYACgkQONu9yGCS
aT7kVA/+KKlE3UFuGmV1ZmiHagHF+oRZKSk9m97F5zgfAcEHAcTnnuikzvJHuepU
4hPMsH+tTXafOJLh81bv7IH3RhHtvmQZPQyWUw7ysY9ms/7CZxjkuirxLWI3evUG
lre7OiApyOPkxERBfA5f9r2D1ufXC742xcAdaXrn+GSZd4nuId5f0IbHmfdNv/MV
zTt6+0qRU3TMpsUdqp0rIm/0KUXtopCDFf2fI/lIImAvN2onuiqDy+TC0FJ0ErTQ
C3wTEi1j9u6l3AO51OYm57TbKj/KmVOcQdcQyskHGHbB+7nS9z29LXQyorRUKqkv
KTs739kgG8GH0ZegTwPVPCx5t1SBzy8fuzI2c2MMVfNCT6rWJVS7brzeb7zDLuRT
9pSr9MnoQNYMhJ3IlPvgPHKwvpP4t2el7Z8noVTRXHDjrkC238gloHwvH78/b2ao
bXO3DRKTzB4Vv/Q8YUPFmj5fhPqz5lnK6idr4r72JSlzfjxtYoPAKwYihDGxmeLN
mWikAPepLqoGg/P2ztKhV/fL9TVhJB+d2YM5op/b+pUxZtYdiJODefFF1ebBbF34
sRG12htP7GV/MTkxC7Yu0h3vS3HWVHugHMBIXXUnqlOANMUbyAMEQW+xkdS/W5bd
QnowcQr+DT1A5b9P1bYXB7efNiHENxo/jvuJTrzZmLioy1MPqeE=
=219k
-----END PGP SIGNATURE-----

Merge 6.1.76 into android-6.1

Changes in 6.1.76
	usb: dwc3: gadget: Refactor EP0 forced stall/restart into a separate API
	usb: dwc3: gadget: Queue PM runtime idle on disconnect event
	usb: dwc3: gadget: Handle EP0 request dequeuing properly
	Revert "nSVM: Check for reserved encodings of TLB_CONTROL in nested VMCB"
	iio: adc: ad7091r: Set alert bit in config register
	iio: adc: ad7091r: Allow users to configure device events
	ext4: allow for the last group to be marked as trimmed
	arm64: properly install vmlinuz.efi
	OPP: Pass rounded rate to _set_opp()
	btrfs: sysfs: validate scrub_speed_max value
	crypto: api - Disallow identical driver names
	PM: hibernate: Enforce ordering during image compression/decompression
	hwrng: core - Fix page fault dead lock on mmap-ed hwrng
	crypto: s390/aes - Fix buffer overread in CTR mode
	s390/vfio-ap: unpin pages on gisc registration failure
	PM / devfreq: Fix buffer overflow in trans_stat_show
	media: imx355: Enable runtime PM before registering async sub-device
	rpmsg: virtio: Free driver_override when rpmsg_remove()
	media: ov9734: Enable runtime PM before registering async sub-device
	s390/vfio-ap: always filter entire AP matrix
	s390/vfio-ap: loop over the shadow APCB when filtering guest's AP configuration
	s390/vfio-ap: let on_scan_complete() callback filter matrix and update guest's APCB
	mips: Fix max_mapnr being uninitialized on early stages
	bus: mhi: host: Add alignment check for event ring read pointer
	bus: mhi: host: Drop chan lock before queuing buffers
	bus: mhi: host: Add spinlock to protect WP access when queueing TREs
	parisc/firmware: Fix F-extend for PDC addresses
	parisc/power: Fix power soft-off button emulation on qemu
	async: Split async_schedule_node_domain()
	async: Introduce async_schedule_dev_nocall()
	iio: adc: ad7091r: Enable internal vref if external vref is not supplied
	dmaengine: fix NULL pointer in channel unregistration function
	scsi: ufs: core: Remove the ufshcd_hba_exit() call from ufshcd_async_scan()
	arm64: dts: qcom: sc7180: fix USB wakeup interrupt types
	arm64: dts: qcom: sdm845: fix USB wakeup interrupt types
	arm64: dts: qcom: sm8150: fix USB wakeup interrupt types
	arm64: dts: qcom: sc7280: fix usb_1 wakeup interrupt types
	arm64: dts: qcom: sdm845: fix USB DP/DM HS PHY interrupts
	arm64: dts: qcom: sm8150: fix USB DP/DM HS PHY interrupts
	lsm: new security_file_ioctl_compat() hook
	docs: kernel_abi.py: fix command injection
	scripts/get_abi: fix source path leak
	media: videobuf2-dma-sg: fix vmap callback
	mmc: core: Use mrq.sbc in close-ended ffu
	mmc: mmc_spi: remove custom DMA mapped buffers
	media: mtk-jpeg: Fix use after free bug due to error path handling in mtk_jpeg_dec_device_run
	arm64: Rename ARM64_WORKAROUND_2966298
	rtc: cmos: Use ACPI alarm for non-Intel x86 systems too
	rtc: Adjust failure return code for cmos_set_alarm()
	rtc: mc146818-lib: Adjust failure return code for mc146818_get_time()
	rtc: Add support for configuring the UIP timeout for RTC reads
	rtc: Extend timeout for waiting for UIP to clear to 1s
	nouveau/vmm: don't set addr on the fail path to avoid warning
	ubifs: ubifs_symlink: Fix memleak of inode->i_link in error path
	mm/rmap: fix misplaced parenthesis of a likely()
	mm/sparsemem: fix race in accessing memory_section->usage
	rename(): fix the locking of subdirectories
	serial: sc16is7xx: improve regmap debugfs by using one regmap per port
	serial: sc16is7xx: remove wasteful static buffer in sc16is7xx_regmap_name()
	serial: sc16is7xx: remove global regmap from struct sc16is7xx_port
	serial: sc16is7xx: remove unused line structure member
	serial: sc16is7xx: change EFR lock to operate on each channels
	serial: sc16is7xx: convert from _raw_ to _noinc_ regmap functions for FIFO
	serial: sc16is7xx: fix invalid sc16is7xx_lines bitfield in case of probe error
	serial: sc16is7xx: remove obsolete loop in sc16is7xx_port_irq()
	serial: sc16is7xx: improve do/while loop in sc16is7xx_irq()
	LoongArch/smp: Call rcutree_report_cpu_starting() earlier
	mm: page_alloc: unreserve highatomic page blocks before oom
	ksmbd: set v2 lease version on lease upgrade
	ksmbd: fix potential circular locking issue in smb2_set_ea()
	ksmbd: don't increment epoch if current state and request state are same
	ksmbd: send lease break notification on FILE_RENAME_INFORMATION
	ksmbd: Add missing set_freezable() for freezable kthread
	Revert "drm/amd: Enable PCIe PME from D3"
	drm/amd/display: pbn_div need be updated for hotplug event
	wifi: mac80211: fix potential sta-link leak
	net/smc: fix illegal rmb_desc access in SMC-D connection dump
	tcp: make sure init the accept_queue's spinlocks once
	bnxt_en: Wait for FLR to complete during probe
	vlan: skip nested type that is not IFLA_VLAN_QOS_MAPPING
	llc: make llc_ui_sendmsg() more robust against bonding changes
	llc: Drop support for ETH_P_TR_802_2.
	udp: fix busy polling
	net: fix removing a namespace with conflicting altnames
	tun: fix missing dropped counter in tun_xdp_act
	tun: add missing rx stats accounting in tun_xdp_act
	net: micrel: Fix PTP frame parsing for lan8814
	net/rds: Fix UBSAN: array-index-out-of-bounds in rds_cmsg_recv
	netfs, fscache: Prevent Oops in fscache_put_cache()
	tracing: Ensure visibility when inserting an element into tracing_map
	afs: Hide silly-rename files from userspace
	tcp: Add memory barrier to tcp_push()
	netlink: fix potential sleeping issue in mqueue_flush_file
	ipv6: init the accept_queue's spinlocks in inet6_create
	net/mlx5: DR, Use the right GVMI number for drop action
	net/mlx5: DR, Can't go to uplink vport on RX rule
	net/mlx5: Use mlx5 device constant for selecting CQ period mode for ASO
	net/mlx5e: Allow software parsing when IPsec crypto is enabled
	net/mlx5e: fix a double-free in arfs_create_groups
	net/mlx5e: fix a potential double-free in fs_any_create_groups
	rcu: Defer RCU kthreads wakeup when CPU is dying
	netfilter: nft_limit: reject configurations that cause integer overflow
	btrfs: fix infinite directory reads
	btrfs: set last dir index to the current last index when opening dir
	btrfs: refresh dir last index during a rewinddir(3) call
	btrfs: fix race between reading a directory and adding entries to it
	netfilter: nf_tables: restrict anonymous set and map names to 16 bytes
	netfilter: nf_tables: validate NFPROTO_* family
	net: stmmac: Wait a bit for the reset to take effect
	net: mvpp2: clear BM pool before initialization
	selftests: netdevsim: fix the udp_tunnel_nic test
	fjes: fix memleaks in fjes_hw_setup
	net: fec: fix the unhandled context fault from smmu
	nbd: always initialize struct msghdr completely
	btrfs: avoid copying BTRFS_ROOT_SUBVOL_DEAD flag to snapshot of subvolume being deleted
	btrfs: ref-verify: free ref cache before clearing mount opt
	btrfs: tree-checker: fix inline ref size in error messages
	btrfs: don't warn if discard range is not aligned to sector
	btrfs: defrag: reject unknown flags of btrfs_ioctl_defrag_range_args
	btrfs: don't abort filesystem when attempting to snapshot deleted subvolume
	rbd: don't move requests to the running list on errors
	exec: Fix error handling in begin_new_exec()
	wifi: iwlwifi: fix a memory corruption
	hv_netvsc: Calculate correct ring size when PAGE_SIZE is not 4 Kbytes
	netfilter: nft_chain_filter: handle NETDEV_UNREGISTER for inet/ingress basechain
	netfilter: nf_tables: reject QUEUE/DROP verdict parameters
	platform/x86: p2sb: Allow p2sb_bar() calls during PCI device probe
	ksmbd: fix global oob in ksmbd_nl_policy
	firmware: arm_scmi: Check mailbox/SMT channel for consistency
	xfs: read only mounts with fsopen mount API are busted
	gpiolib: acpi: Ignore touchpad wakeup on GPD G1619-04
	cpufreq: intel_pstate: Refine computation of P-state for given frequency
	drm: Don't unref the same fb many times by mistake due to deadlock handling
	drm/bridge: nxp-ptn3460: fix i2c_master_send() error checking
	drm/tidss: Fix atomic_flush check
	drm/amd/display: Disable PSR-SU on Parade 0803 TCON again
	platform/x86: intel-uncore-freq: Fix types in sysfs callbacks
	drm/bridge: nxp-ptn3460: simplify some error checking
	drm/amd/display: Port DENTIST hang and TDR fixes to OTG disable W/A
	drm/amdgpu/pm: Fix the power source flag error
	erofs: get rid of the remaining kmap_atomic()
	erofs: fix lz4 inplace decompression
	media: ov13b10: Support device probe in non-zero ACPI D state
	media: ov13b10: Enable runtime PM before registering async sub-device
	bus: mhi: ep: Do not allocate event ring element on stack
	PM: core: Remove unnecessary (void *) conversions
	PM: sleep: Fix possible deadlocks in core system-wide PM code
	thermal: intel: hfi: Refactor enabling code into helper functions
	thermal: intel: hfi: Disable an HFI instance when all its CPUs go offline
	thermal: intel: hfi: Add syscore callbacks for system-wide PM
	fs/pipe: move check to pipe_has_watch_queue()
	pipe: wakeup wr_wait after setting max_usage
	ARM: dts: qcom: sdx55: fix USB wakeup interrupt types
	ARM: dts: samsung: exynos4210-i9100: Unconditionally enable LDO12
	ARM: dts: qcom: sdx55: fix pdc '#interrupt-cells'
	ARM: dts: qcom: sdx55: fix USB DP/DM HS PHY interrupts
	ARM: dts: qcom: sdx55: fix USB SS wakeup
	dlm: use kernel_connect() and kernel_bind()
	serial: core: Provide port lock wrappers
	serial: sc16is7xx: Use port lock wrappers
	serial: sc16is7xx: fix unconditional activation of THRI interrupt
	btrfs: zoned: factor out prepare_allocation_zoned()
	btrfs: zoned: optimize hint byte for zoned allocator
	drm/panel-edp: drm/panel-edp: Fix AUO B116XAK01 name and timing
	Revert "powerpc/64s: Increase default stack size to 32KB"
	drm/bridge: parade-ps8640: Wait for HPD when doing an AUX transfer
	drm: panel-simple: add missing bus flags for Tianma tm070jvhg[30/33]
	drm/bridge: sii902x: Use devm_regulator_bulk_get_enable()
	drm/bridge: sii902x: Fix probing race issue
	drm/bridge: sii902x: Fix audio codec unregistration
	drm/bridge: parade-ps8640: Ensure bridge is suspended in .post_disable()
	drm/bridge: parade-ps8640: Make sure we drop the AUX mutex in the error case
	drm/exynos: fix accidental on-stack copy of exynos_drm_plane
	drm/exynos: gsc: minor fix for loop iteration in gsc_runtime_resume
	gpio: eic-sprd: Clear interrupt after set the interrupt type
	block: Move checking GENHD_FL_NO_PART to bdev_add_partition()
	drm/bridge: anx7625: Ensure bridge is suspended in disable()
	spi: bcm-qspi: fix SFDP BFPT read by usig mspi read
	spi: fix finalize message on error return
	MIPS: lantiq: register smp_ops on non-smp platforms
	cxl/region: Fix overflow issue in alloc_hpa()
	mips: Call lose_fpu(0) before initializing fcr31 in mips_set_personality_nan
	tick/sched: Preserve number of idle sleeps across CPU hotplug events
	x86/entry/ia32: Ensure s32 is sign extended to s64
	serial: core: fix kernel-doc for uart_port_unlock_irqrestore()
	net/mlx5e: Handle hardware IPsec limits events
	Linux 6.1.76

Change-Id: I4725561e2ca5df042a1fe307af701e7d5e2d06c8
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 2dbddbe358
@@ -52,6 +52,9 @@ Description:
 
 		echo 0 > /sys/class/devfreq/.../trans_stat
 
+		If the transition table is bigger than PAGE_SIZE, reading
+		this will return an -EFBIG error.
+
 What:		/sys/class/devfreq/.../available_frequencies
 Date:		October 2012
 Contact:	Nishanth Menon <nm@ti.com>
@@ -7,5 +7,5 @@ marked to be removed at some later point in time.
 The description of the interface will document the reason why it is
 obsolete and when it can be expected to be removed.
 
-.. kernel-abi:: $srctree/Documentation/ABI/obsolete
+.. kernel-abi:: ABI/obsolete
    :rst:
@@ -1,5 +1,5 @@
 ABI removed symbols
 ===================
 
-.. kernel-abi:: $srctree/Documentation/ABI/removed
+.. kernel-abi:: ABI/removed
    :rst:
@@ -10,5 +10,5 @@ for at least 2 years.
 Most interfaces (like syscalls) are expected to never change and always
 be available.
 
-.. kernel-abi:: $srctree/Documentation/ABI/stable
+.. kernel-abi:: ABI/stable
    :rst:
@@ -16,5 +16,5 @@ Programs that use these interfaces are strongly encouraged to add their
 name to the description of these interfaces, so that the kernel
 developers can easily notify them if any changes occur.
 
-.. kernel-abi:: $srctree/Documentation/ABI/testing
+.. kernel-abi:: ABI/testing
    :rst:
@@ -22,13 +22,16 @@ exclusive.
 3) object removal.  Locking rules: caller locks parent, finds victim,
 locks victim and calls the method.  Locks are exclusive.
 
-4) rename() that is _not_ cross-directory.  Locking rules: caller locks the
-parent and finds source and target.  We lock both (provided they exist).  If we
-need to lock two inodes of different type (dir vs non-dir), we lock directory
-first.  If we need to lock two inodes of the same type, lock them in inode
-pointer order.  Then call the method.  All locks are exclusive.
-NB: we might get away with locking the source (and target in exchange
-case) shared.
+4) rename() that is _not_ cross-directory.  Locking rules: caller locks
+the parent and finds source and target.  Then we decide which of the
+source and target need to be locked.  Source needs to be locked if it's a
+non-directory; target - if it's a non-directory or about to be removed.
+Take the locks that need to be taken, in inode pointer order if need
+to take both (that can happen only when both source and target are
+non-directories - the source because it wouldn't be locked otherwise
+and the target because mixing directory and non-directory is allowed
+only with RENAME_EXCHANGE, and that won't be removing the target).
+After the locks had been taken, call the method.  All locks are exclusive.
 
 5) link creation.  Locking rules:
 
@@ -44,20 +47,17 @@ rules:
 
 	* lock the filesystem
 	* lock parents in "ancestors first" order. If one is not ancestor of
-	  the other, lock them in inode pointer order.
+	  the other, lock the parent of source first.
 	* find source and target.
 	* if old parent is equal to or is a descendent of target
 	  fail with -ENOTEMPTY
 	* if new parent is equal to or is a descendent of source
 	  fail with -ELOOP
-	* Lock both the source and the target provided they exist. If we
-	  need to lock two inodes of different type (dir vs non-dir), we lock
-	  the directory first. If we need to lock two inodes of the same type,
-	  lock them in inode pointer order.
+	* Lock subdirectories involved (source before target).
+	* Lock non-directories involved, in inode pointer order.
 	* call the method.
 
-All ->i_rwsem are taken exclusive. Again, we might get away with locking
-the source (and target in exchange case) shared.
+All ->i_rwsem are taken exclusive.
 
 The rules above obviously guarantee that all directories that are going to be
 read, modified or removed by method will be locked by caller.
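The "inode pointer order" rule above is a standard deadlock-avoidance idiom: impose one total order on lock objects so that no two tasks ever acquire the same pair in opposite orders. A minimal, hypothetical sketch of the idea (plain Python locks standing in for inodes; not kernel code):

```python
import threading

def acquire_in_order(a: threading.Lock, b: threading.Lock):
    # Mirror of the "lock them in inode pointer order" rule: order the two
    # lock objects by a stable key (here id(), standing in for the inode
    # pointer) so every task acquires the pair in the same order.
    first, second = (a, b) if id(a) < id(b) else (b, a)
    first.acquire()
    second.acquire()
    return first, second

x, y = threading.Lock(), threading.Lock()
f1, s1 = acquire_in_order(x, y)
s1.release(); f1.release()
f2, s2 = acquire_in_order(y, x)
# Same pair, opposite argument order: identical acquisition order,
# so two tasks locking (x, y) and (y, x) cannot deadlock.
assert (f1, s1) == (f2, s2)
s2.release(); f2.release()
```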
@@ -67,6 +67,7 @@ If no directory is its own ancestor, the scheme above is deadlock-free.
 
 Proof:
 
+[XXX: will be updated once we are done massaging the lock_rename()]
 	First of all, at any moment we have a linear ordering of the
 	objects - A < B iff (A is an ancestor of B) or (B is not an ancestor
 	of A and ptr(A) < ptr(B)).
@@ -99,7 +99,7 @@ symlink:	exclusive
 mkdir:		exclusive
 unlink:		exclusive (both)
 rmdir:		exclusive (both)(see below)
-rename:		exclusive (all)	(see below)
+rename:		exclusive (both parents, some children)	(see below)
 readlink:	no
 get_link:	no
 setattr:	exclusive
@@ -119,6 +119,9 @@ fileattr_set:	exclusive
 Additionally, ->rmdir(), ->unlink() and ->rename() have ->i_rwsem
 exclusive on victim.
 cross-directory ->rename() has (per-superblock) ->s_vfs_rename_sem.
+->unlink() and ->rename() have ->i_rwsem exclusive on all non-directories
+involved.
+->rename() has ->i_rwsem exclusive on any subdirectory that changes parent.
 
 See Documentation/filesystems/directory-locking.rst for more detailed discussion
 of the locking scheme for directory operations.
@@ -943,3 +943,21 @@ file pointer instead of struct dentry pointer. d_tmpfile() is similarly
 changed to simplify callers.  The passed file is in a non-open state and on
 success must be opened before returning (e.g. by calling
 finish_open_simple()).
+
+---
+
+**mandatory**
+
+If ->rename() update of .. on cross-directory move needs an exclusion with
+directory modifications, do *not* lock the subdirectory in question in your
+->rename() - it's done by the caller now [that item should've been added in
+28eceeda130f "fs: Lock moved directories"].
+
+---
+
+**mandatory**
+
+On same-directory ->rename() the (tautological) update of .. is not protected
+by any locks; just don't do it if the old parent is the same as the new one.
+We really can't lock two subdirectories in same-directory rename - not without
+deadlocks.
@@ -39,8 +39,6 @@ import sys
 import re
 import kernellog
 
-from os import path
-
 from docutils import nodes, statemachine
 from docutils.statemachine import ViewList
 from docutils.parsers.rst import directives, Directive
@@ -73,60 +71,26 @@ class KernelCmd(Directive):
     }
 
     def run(self):
 
         doc = self.state.document
         if not doc.settings.file_insertion_enabled:
             raise self.warning("docutils: file insertion disabled")
 
-        env = doc.settings.env
-        cwd = path.dirname(doc.current_source)
+        srctree = os.path.abspath(os.environ["srctree"])
 
-        cmd = "get_abi.pl rest --enable-lineno --dir "
-        cmd += self.arguments[0]
+        args = [
+            os.path.join(srctree, 'scripts/get_abi.pl'),
+            'rest',
+            '--enable-lineno',
+            '--dir', os.path.join(srctree, 'Documentation', self.arguments[0]),
+        ]
 
         if 'rst' in self.options:
-            cmd += " --rst-source"
+            args.append('--rst-source')
 
-        srctree = path.abspath(os.environ["srctree"])
-
-        fname = cmd
-
-        # extend PATH with $(srctree)/scripts
-        path_env = os.pathsep.join([
-            srctree + os.sep + "scripts",
-            os.environ["PATH"]
-        ])
-        shell_env = os.environ.copy()
-        shell_env["PATH"] = path_env
-        shell_env["srctree"] = srctree
-
-        lines = self.runCmd(cmd, shell=True, cwd=cwd, env=shell_env)
+        lines = subprocess.check_output(args, cwd=os.path.dirname(doc.current_source)).decode('utf-8')
+
         nodeList = self.nestedParse(lines, self.arguments[0])
         return nodeList
 
-    def runCmd(self, cmd, **kwargs):
-        u"""Run command ``cmd`` and return its stdout as unicode."""
-
-        try:
-            proc = subprocess.Popen(
-                cmd
-                , stdout = subprocess.PIPE
-                , stderr = subprocess.PIPE
-                , **kwargs
-            )
-            out, err = proc.communicate()
-
-            out, err = codecs.decode(out, 'utf-8'), codecs.decode(err, 'utf-8')
-
-            if proc.returncode != 0:
-                raise self.severe(
-                    u"command '%s' failed with return code %d"
-                    % (cmd, proc.returncode)
-                )
-        except OSError as exc:
-            raise self.severe(u"problems with '%s' directive: %s."
-                              % (self.name, ErrorString(exc)))
-        return out
-
     def nestedParse(self, lines, fname):
         env = self.state.document.settings.env
         content = ViewList()
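The command-injection fix above replaces a shell-interpolated command string (`shell=True`) with an argument list passed straight to `subprocess.check_output()`. A small standalone sketch (not the Sphinx extension itself; `build_args` and the hostile string are illustrative) of why the list form is safe:

```python
import subprocess

def build_args(directory):
    # With a list, each element becomes exactly one argv entry; no shell
    # ever parses the string, so metacharacters in `directory` are inert.
    return ["echo", "--dir", directory]

hostile = "ABI/testing; rm -rf /"
out = subprocess.check_output(build_args(hostile)).decode("utf-8")
# The hostile string comes back verbatim as a single argument instead of
# being split and executed, which is what `shell=True` would have risked.
assert "; rm -rf /" in out
```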
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 1
-SUBLEVEL = 75
+SUBLEVEL = 76
 EXTRAVERSION =
 NAME = Curry Ramen
 
@@ -80,7 +80,7 @@ init_rtc_epoch(void)
 static int
 alpha_rtc_read_time(struct device *dev, struct rtc_time *tm)
 {
-	int ret = mc146818_get_time(tm);
+	int ret = mc146818_get_time(tm, 10);
 
 	if (ret < 0) {
 		dev_err_ratelimited(dev, "unable to read current time\n");
@@ -521,6 +521,14 @@ vtcam_reg: LDO12 {
 				regulator-name = "VT_CAM_1.8V";
 				regulator-min-microvolt = <1800000>;
 				regulator-max-microvolt = <1800000>;
+
+				/*
+				 * Force-enable this regulator; otherwise the
+				 * kernel hangs very early in the boot process
+				 * for about 12 seconds, without apparent
+				 * reason.
+				 */
+				regulator-always-on;
 			};
 
 			vcclcd_reg: LDO13 {
@@ -495,10 +495,10 @@ usb: usb@a6f8800 {
 					 <&gcc GCC_USB30_MASTER_CLK>;
 			assigned-clock-rates = <19200000>, <200000000>;
 
-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 198 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 158 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pdc 51 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pdc 11 IRQ_TYPE_EDGE_BOTH>,
+					      <&pdc 10 IRQ_TYPE_EDGE_BOTH>;
 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
 
@@ -522,7 +522,7 @@ pdc: interrupt-controller@b210000 {
 			compatible = "qcom,sdx55-pdc", "qcom,pdc";
 			reg = <0x0b210000 0x30000>;
 			qcom,pdc-ranges = <0 179 52>;
-			#interrupt-cells = <3>;
+			#interrupt-cells = <2>;
 			interrupt-parent = <&intc>;
 			interrupt-controller;
 		};
@@ -970,6 +970,23 @@ config ARM64_ERRATUM_2457168
 
 	  If unsure, say Y.
 
+config ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
+	bool
+
+config ARM64_ERRATUM_2966298
+	bool "Cortex-A520: 2966298: workaround for speculatively executed unprivileged load"
+	select ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
+	default y
+	help
+	  This option adds the workaround for ARM Cortex-A520 erratum 2966298.
+
+	  On an affected Cortex-A520 core, a speculatively executed unprivileged
+	  load might leak data from a privileged level via a cache side channel.
+
+	  Work around this problem by executing a TLBI before returning to EL0.
+
+	  If unsure, say Y.
+
 config CAVIUM_ERRATUM_22375
 	bool "Cavium erratum 22375, 24313"
 	default y
@@ -2769,8 +2769,8 @@ usb_1: usb@a6f8800 {
 
 			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
 					      <&pdc 6 IRQ_TYPE_LEVEL_HIGH>,
-					      <&pdc 8 IRQ_TYPE_LEVEL_HIGH>,
-					      <&pdc 9 IRQ_TYPE_LEVEL_HIGH>;
+					      <&pdc 8 IRQ_TYPE_EDGE_BOTH>,
+					      <&pdc 9 IRQ_TYPE_EDGE_BOTH>;
 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
 
@@ -3664,9 +3664,9 @@ usb_1: usb@a6f8800 {
 			assigned-clock-rates = <19200000>, <200000000>;
 
 			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
-					      <&pdc 14 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pdc 14 IRQ_TYPE_EDGE_BOTH>,
 					      <&pdc 15 IRQ_TYPE_EDGE_BOTH>,
-					      <&pdc 17 IRQ_TYPE_EDGE_BOTH>;
+					      <&pdc 17 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "hs_phy_irq",
 					  "dp_hs_phy_irq",
 					  "dm_hs_phy_irq",
@@ -4048,10 +4048,10 @@ usb_1: usb@a6f8800 {
 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
 			assigned-clock-rates = <19200000>, <150000000>;
 
-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+					      <&intc GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pdc_intc 8 IRQ_TYPE_EDGE_BOTH>,
+					      <&pdc_intc 9 IRQ_TYPE_EDGE_BOTH>;
 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
 
@@ -4099,10 +4099,10 @@ usb_2: usb@a8f8800 {
 					  <&gcc GCC_USB30_SEC_MASTER_CLK>;
 			assigned-clock-rates = <19200000>, <150000000>;
 
-			interrupts = <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 490 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 491 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
+					      <&intc GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pdc_intc 10 IRQ_TYPE_EDGE_BOTH>,
+					      <&pdc_intc 11 IRQ_TYPE_EDGE_BOTH>;
 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
 
@@ -3628,10 +3628,10 @@ usb_1: usb@a6f8800 {
 					  <&gcc GCC_USB30_PRIM_MASTER_CLK>;
 			assigned-clock-rates = <19200000>, <200000000>;
 
-			interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&intc GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>,
+					      <&intc GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pdc 8 IRQ_TYPE_EDGE_BOTH>,
+					      <&pdc 9 IRQ_TYPE_EDGE_BOTH>;
 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
 
@@ -3677,10 +3677,10 @@ usb_2: usb@a8f8800 {
 					  <&gcc GCC_USB30_SEC_MASTER_CLK>;
 			assigned-clock-rates = <19200000>, <200000000>;
 
-			interrupts = <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 490 IRQ_TYPE_LEVEL_HIGH>,
-				     <GIC_SPI 491 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>,
+					      <&intc GIC_SPI 487 IRQ_TYPE_LEVEL_HIGH>,
+					      <&pdc 10 IRQ_TYPE_EDGE_BOTH>,
+					      <&pdc 11 IRQ_TYPE_EDGE_BOTH>;
 			interrupt-names = "hs_phy_irq", "ss_phy_irq",
 					  "dm_hs_phy_irq", "dp_hs_phy_irq";
 
@@ -17,7 +17,8 @@
 # $3 - kernel map file
 # $4 - default install path (blank if root directory)
 
-if [ "$(basename $2)" = "Image.gz" ]; then
+if [ "$(basename $2)" = "Image.gz" ] || [ "$(basename $2)" = "vmlinuz.efi" ]
+then
 # Compressed install
   echo "Installing compressed kernel"
   base=vmlinuz
@@ -723,6 +723,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
                 .cpu_enable = cpu_clear_bf16_from_user_emulation,
         },
 #endif
+#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
+        {
+                .desc = "ARM erratum 2966298",
+                .capability = ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD,
+                /* Cortex-A520 r0p0 - r0p1 */
+                ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A520, 0, 0, 1),
+        },
+#endif
 #ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
         {
                 .desc = "AmpereOne erratum AC03_CPU_38",
@@ -419,6 +419,10 @@ alternative_else_nop_endif
         ldp     x28, x29, [sp, #16 * 14]
 
         .if     \el == 0
+alternative_if ARM64_WORKAROUND_SPECULATIVE_UNPRIV_LOAD
+        tlbi    vale1, xzr
+        dsb     nsh
+alternative_else_nop_endif
 alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0
         ldr     lr, [sp, #S_LR]
         add     sp, sp, #PT_REGS_SIZE           // restore sp
@@ -71,6 +71,7 @@ WORKAROUND_2064142
 WORKAROUND_2077057
 WORKAROUND_2457168
 WORKAROUND_2658417
+WORKAROUND_SPECULATIVE_UNPRIV_LOAD
 WORKAROUND_TRBE_OVERWRITE_FILL_MODE
 WORKAROUND_TSB_FLUSH_FAILURE
 WORKAROUND_TRBE_WRITE_OUT_OF_RANGE
@@ -471,8 +471,9 @@ asmlinkage void start_secondary(void)
         unsigned int cpu;
 
         sync_counter();
-        cpu = smp_processor_id();
+        cpu = raw_smp_processor_id();
         set_my_cpu_offset(per_cpu_offset(cpu));
+        rcu_cpu_starting(cpu);
 
         cpu_probe();
         constant_clockevent_init();
@@ -11,6 +11,7 @@
 
 #include <asm/cpu-features.h>
 #include <asm/cpu-info.h>
+#include <asm/fpu.h>
 
 #ifdef CONFIG_MIPS_FP_SUPPORT
 
@@ -309,6 +310,11 @@ void mips_set_personality_nan(struct arch_elf_state *state)
         struct cpuinfo_mips *c = &boot_cpu_data;
         struct task_struct *t = current;
 
+        /* Do this early so t->thread.fpu.fcr31 won't be clobbered in case
+         * we are preempted before the lose_fpu(0) in start_thread.
+         */
+        lose_fpu(0);
+
         t->thread.fpu.fcr31 = c->fpu_csr31;
         switch (state->nan_2008) {
         case 0:
@@ -114,10 +114,9 @@ void __init prom_init(void)
         prom_init_cmdline();
 
 #if defined(CONFIG_MIPS_MT_SMP)
-        if (cpu_has_mipsmt) {
-                lantiq_smp_ops = vsmp_smp_ops;
+        lantiq_smp_ops = vsmp_smp_ops;
+        if (cpu_has_mipsmt)
                 lantiq_smp_ops.init_secondary = lantiq_init_secondary;
-                register_smp_ops(&lantiq_smp_ops);
-        }
+        register_smp_ops(&lantiq_smp_ops);
 #endif
 }
@@ -417,7 +417,12 @@ void __init paging_init(void)
                        (highend_pfn - max_low_pfn) << (PAGE_SHIFT - 10));
                 max_zone_pfns[ZONE_HIGHMEM] = max_low_pfn;
         }
 
+        max_mapnr = highend_pfn ? highend_pfn : max_low_pfn;
+#else
+        max_mapnr = max_low_pfn;
 #endif
+        high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
+
         free_area_init(max_zone_pfns);
 }
@@ -453,13 +458,6 @@ void __init mem_init(void)
          */
         BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (_PFN_SHIFT > PAGE_SHIFT));
 
-#ifdef CONFIG_HIGHMEM
-        max_mapnr = highend_pfn ? highend_pfn : max_low_pfn;
-#else
-        max_mapnr = max_low_pfn;
-#endif
-        high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
-
         maar_init();
         memblock_free_all();
         setup_zero_pages();     /* Setup zeroed pages.  */
@@ -123,10 +123,10 @@ static unsigned long f_extend(unsigned long address)
 #ifdef CONFIG_64BIT
         if(unlikely(parisc_narrow_firmware)) {
                 if((address & 0xff000000) == 0xf0000000)
-                        return 0xf0f0f0f000000000UL | (u32)address;
+                        return (0xfffffff0UL << 32) | (u32)address;
 
                 if((address & 0xf0000000) == 0xf0000000)
-                        return 0xffffffff00000000UL | (u32)address;
+                        return (0xffffffffUL << 32) | (u32)address;
         }
 #endif
         return address;
@@ -806,7 +806,6 @@ config THREAD_SHIFT
         int "Thread shift" if EXPERT
         range 13 15
         default "15" if PPC_256K_PAGES
-        default "15" if PPC_PSERIES || PPC_POWERNV
         default "14" if PPC64
         default "13"
         help
@@ -601,7 +601,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
          * final block may be < AES_BLOCK_SIZE, copy only nbytes
          */
         if (nbytes) {
-                cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
+                memset(buf, 0, AES_BLOCK_SIZE);
+                memcpy(buf, walk.src.virt.addr, nbytes);
+                cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
                             AES_BLOCK_SIZE, walk.iv);
                 memcpy(walk.dst.virt.addr, buf, nbytes);
                 crypto_inc(walk.iv, AES_BLOCK_SIZE);
@@ -688,9 +688,11 @@ static int ctr_paes_crypt(struct skcipher_request *req)
          * final block may be < AES_BLOCK_SIZE, copy only nbytes
          */
         if (nbytes) {
+                memset(buf, 0, AES_BLOCK_SIZE);
+                memcpy(buf, walk.src.virt.addr, nbytes);
                 while (1) {
                         if (cpacf_kmctr(ctx->fc, &param, buf,
-                                        walk.src.virt.addr, AES_BLOCK_SIZE,
+                                        buf, AES_BLOCK_SIZE,
                                         walk.iv) == AES_BLOCK_SIZE)
                                 break;
                         if (__paes_convert_key(ctx))
@@ -58,12 +58,29 @@ extern long __ia32_sys_ni_syscall(const struct pt_regs *regs);
                 ,,regs->di,,regs->si,,regs->dx                          \
                 ,,regs->r10,,regs->r8,,regs->r9)                        \
 
+/* SYSCALL_PT_ARGS is Adapted from s390x */
+#define SYSCALL_PT_ARG6(m, t1, t2, t3, t4, t5, t6)                      \
+        SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5), m(t6, (regs->bp))
+#define SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5)                          \
+        SYSCALL_PT_ARG4(m, t1, t2, t3, t4), m(t5, (regs->di))
+#define SYSCALL_PT_ARG4(m, t1, t2, t3, t4)                              \
+        SYSCALL_PT_ARG3(m, t1, t2, t3), m(t4, (regs->si))
+#define SYSCALL_PT_ARG3(m, t1, t2, t3)                                  \
+        SYSCALL_PT_ARG2(m, t1, t2), m(t3, (regs->dx))
+#define SYSCALL_PT_ARG2(m, t1, t2)                                      \
+        SYSCALL_PT_ARG1(m, t1), m(t2, (regs->cx))
+#define SYSCALL_PT_ARG1(m, t1) m(t1, (regs->bx))
+#define SYSCALL_PT_ARGS(x, ...) SYSCALL_PT_ARG##x(__VA_ARGS__)
+
+#define __SC_COMPAT_CAST(t, a)                                          \
+        (__typeof(__builtin_choose_expr(__TYPE_IS_L(t), 0, 0U)))        \
+        (unsigned int)a
+
 /* Mapping of registers to parameters for syscalls on i386 */
 #define SC_IA32_REGS_TO_ARGS(x, ...)                                    \
-        __MAP(x,__SC_ARGS                                               \
-              ,,(unsigned int)regs->bx,,(unsigned int)regs->cx          \
-              ,,(unsigned int)regs->dx,,(unsigned int)regs->si          \
-              ,,(unsigned int)regs->di,,(unsigned int)regs->bp)
+        SYSCALL_PT_ARGS(x, __SC_COMPAT_CAST,                            \
+                        __MAP(x, __SC_TYPE, __VA_ARGS__))               \
 
 #define __SYS_STUB0(abi, name)                                          \
         long __##abi##_##name(const struct pt_regs *regs);              \
@@ -1436,7 +1436,7 @@ irqreturn_t hpet_rtc_interrupt(int irq, void *dev_id)
         memset(&curr_time, 0, sizeof(struct rtc_time));
 
         if (hpet_rtc_flags & (RTC_UIE | RTC_AIE)) {
-                if (unlikely(mc146818_get_time(&curr_time) < 0)) {
+                if (unlikely(mc146818_get_time(&curr_time, 10) < 0)) {
                         pr_err_ratelimited("unable to read current time from RTC\n");
                         return IRQ_HANDLED;
                 }
@@ -67,7 +67,7 @@ void mach_get_cmos_time(struct timespec64 *now)
                 return;
         }
 
-        if (mc146818_get_time(&tm)) {
+        if (mc146818_get_time(&tm, 1000)) {
                 pr_err("Unable to read current time from RTC\n");
                 now->tv_sec = now->tv_nsec = 0;
                 return;
@@ -239,18 +239,6 @@ static bool nested_svm_check_bitmap_pa(struct kvm_vcpu *vcpu, u64 pa, u32 size)
                 kvm_vcpu_is_legal_gpa(vcpu, addr + size - 1);
 }
 
-static bool nested_svm_check_tlb_ctl(struct kvm_vcpu *vcpu, u8 tlb_ctl)
-{
-        /* Nested FLUSHBYASID is not supported yet.  */
-        switch(tlb_ctl) {
-                case TLB_CONTROL_DO_NOTHING:
-                case TLB_CONTROL_FLUSH_ALL_ASID:
-                        return true;
-                default:
-                        return false;
-        }
-}
-
 static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
                                          struct vmcb_ctrl_area_cached *control)
 {
@@ -270,8 +258,6 @@ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
                                            IOPM_SIZE)))
                 return false;
 
-        if (CC(!nested_svm_check_tlb_ctl(vcpu, control->tlb_ctl)))
-                return false;
-
         return true;
 }
@@ -20,8 +20,6 @@ static int blkpg_do_ioctl(struct block_device *bdev,
         struct blkpg_partition p;
         sector_t start, length;
 
-        if (disk->flags & GENHD_FL_NO_PART)
-                return -EINVAL;
         if (!capable(CAP_SYS_ADMIN))
                 return -EACCES;
         if (copy_from_user(&p, upart, sizeof(struct blkpg_partition)))
@@ -453,6 +453,11 @@ int bdev_add_partition(struct gendisk *disk, int partno, sector_t start,
                 goto out;
         }
 
+        if (disk->flags & GENHD_FL_NO_PART) {
+                ret = -EINVAL;
+                goto out;
+        }
+
         if (partition_overlaps(disk, start, length, -1)) {
                 ret = -EBUSY;
                 goto out;
@@ -329,6 +329,7 @@ __crypto_register_alg(struct crypto_alg *alg, struct list_head *algs_to_put)
                 }
 
                 if (!strcmp(q->cra_driver_name, alg->cra_name) ||
+                    !strcmp(q->cra_driver_name, alg->cra_driver_name) ||
                     !strcmp(q->cra_name, alg->cra_driver_name))
                         goto err;
         }
@@ -690,7 +690,7 @@ static bool dpm_async_fn(struct device *dev, async_func_t func)
 
 static void async_resume_noirq(void *data, async_cookie_t cookie)
 {
-        struct device *dev = (struct device *)data;
+        struct device *dev = data;
 
         __device_resume_noirq(dev, pm_transition, true);
         put_device(dev);
@@ -819,7 +819,7 @@ static void __device_resume_early(struct device *dev, pm_message_t state, bool a
 
 static void async_resume_early(void *data, async_cookie_t cookie)
 {
-        struct device *dev = (struct device *)data;
+        struct device *dev = data;
 
         __device_resume_early(dev, pm_transition, true);
         put_device(dev);
@@ -974,7 +974,7 @@ static void __device_resume(struct device *dev, pm_message_t state, bool async)
 
 static void async_resume(void *data, async_cookie_t cookie)
 {
-        struct device *dev = (struct device *)data;
+        struct device *dev = data;
 
         __device_resume(dev, pm_transition, true);
         put_device(dev);
@@ -1260,7 +1260,7 @@ static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool a
 
 static void async_suspend_noirq(void *data, async_cookie_t cookie)
 {
-        struct device *dev = (struct device *)data;
+        struct device *dev = data;
         int error;
 
         error = __device_suspend_noirq(dev, pm_transition, true);
@@ -1443,7 +1443,7 @@ static int __device_suspend_late(struct device *dev, pm_message_t state, bool as
 
 static void async_suspend_late(void *data, async_cookie_t cookie)
 {
-        struct device *dev = (struct device *)data;
+        struct device *dev = data;
         int error;
 
         error = __device_suspend_late(dev, pm_transition, true);
@@ -1723,7 +1723,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
 
 static void async_suspend(void *data, async_cookie_t cookie)
 {
-        struct device *dev = (struct device *)data;
+        struct device *dev = data;
         int error;
 
         error = __device_suspend(dev, pm_transition, true);
@@ -120,7 +120,7 @@ static unsigned int read_magic_time(void)
         struct rtc_time time;
         unsigned int val;
 
-        if (mc146818_get_time(&time) < 0) {
+        if (mc146818_get_time(&time, 1000) < 0) {
                 pr_err("Unable to read current time from RTC\n");
                 return 0;
         }
@@ -494,7 +494,7 @@ static int __sock_xmit(struct nbd_device *nbd, struct socket *sock, int send,
                        struct iov_iter *iter, int msg_flags, int *sent)
 {
         int result;
-        struct msghdr msg;
+        struct msghdr msg = { };
         unsigned int noreclaim_flag;
 
         if (unlikely(!sock)) {
@@ -509,10 +509,6 @@ static int __sock_xmit(struct nbd_device *nbd, struct socket *sock, int send,
         noreclaim_flag = memalloc_noreclaim_save();
         do {
                 sock->sk->sk_allocation = GFP_NOIO | __GFP_MEMALLOC;
-                msg.msg_name = NULL;
-                msg.msg_namelen = 0;
-                msg.msg_control = NULL;
-                msg.msg_controllen = 0;
                 msg.msg_flags = msg_flags | MSG_NOSIGNAL;
 
                 if (send)
@@ -3453,14 +3453,15 @@ static bool rbd_lock_add_request(struct rbd_img_request *img_req)
 static void rbd_lock_del_request(struct rbd_img_request *img_req)
 {
         struct rbd_device *rbd_dev = img_req->rbd_dev;
-        bool need_wakeup;
+        bool need_wakeup = false;
 
         lockdep_assert_held(&rbd_dev->lock_rwsem);
         spin_lock(&rbd_dev->lock_lists_lock);
-        rbd_assert(!list_empty(&img_req->lock_item));
+        if (!list_empty(&img_req->lock_item)) {
                 list_del_init(&img_req->lock_item);
                 need_wakeup = (rbd_dev->lock_state == RBD_LOCK_STATE_RELEASING &&
                                list_empty(&rbd_dev->running_list));
+        }
         spin_unlock(&rbd_dev->lock_lists_lock);
         if (need_wakeup)
                 complete(&rbd_dev->releasing_wait);
@@ -3843,14 +3844,19 @@ static void wake_lock_waiters(struct rbd_device *rbd_dev, int result)
                 return;
         }
 
-        list_for_each_entry(img_req, &rbd_dev->acquiring_list, lock_item) {
+        while (!list_empty(&rbd_dev->acquiring_list)) {
+                img_req = list_first_entry(&rbd_dev->acquiring_list,
+                                           struct rbd_img_request, lock_item);
                 mutex_lock(&img_req->state_mutex);
                 rbd_assert(img_req->state == RBD_IMG_EXCLUSIVE_LOCK);
+                if (!result)
+                        list_move_tail(&img_req->lock_item,
+                                       &rbd_dev->running_list);
+                else
+                        list_del_init(&img_req->lock_item);
                 rbd_img_schedule(img_req, result);
                 mutex_unlock(&img_req->state_mutex);
         }
 
-        list_splice_tail_init(&rbd_dev->acquiring_list, &rbd_dev->running_list);
 }
 
 static bool locker_equal(const struct ceph_locker *lhs,
@@ -71,45 +71,77 @@ static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
 static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
                                         struct mhi_ring_element *tre, u32 len, enum mhi_ev_ccs code)
 {
-        struct mhi_ring_element event = {};
+        struct mhi_ring_element *event;
+        int ret;
 
-        event.ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(*tre));
-        event.dword[0] = MHI_TRE_EV_DWORD0(code, len);
-        event.dword[1] = MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
+        event = kzalloc(sizeof(struct mhi_ring_element), GFP_KERNEL);
+        if (!event)
+                return -ENOMEM;
 
-        return mhi_ep_send_event(mhi_cntrl, ring->er_index, &event, MHI_TRE_DATA_GET_BEI(tre));
+        event->ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(*tre));
+        event->dword[0] = MHI_TRE_EV_DWORD0(code, len);
+        event->dword[1] = MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
+
+        ret = mhi_ep_send_event(mhi_cntrl, ring->er_index, event, MHI_TRE_DATA_GET_BEI(tre));
+        kfree(event);
+
+        return ret;
 }
 
 int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state)
 {
-        struct mhi_ring_element event = {};
+        struct mhi_ring_element *event;
+        int ret;
 
-        event.dword[0] = MHI_SC_EV_DWORD0(state);
-        event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
+        event = kzalloc(sizeof(struct mhi_ring_element), GFP_KERNEL);
+        if (!event)
+                return -ENOMEM;
 
-        return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
+        event->dword[0] = MHI_SC_EV_DWORD0(state);
+        event->dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
+
+        ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
+        kfree(event);
+
+        return ret;
 }
 
 int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env)
 {
-        struct mhi_ring_element event = {};
+        struct mhi_ring_element *event;
+        int ret;
 
-        event.dword[0] = MHI_EE_EV_DWORD0(exec_env);
-        event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
+        event = kzalloc(sizeof(struct mhi_ring_element), GFP_KERNEL);
+        if (!event)
+                return -ENOMEM;
 
-        return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
+        event->dword[0] = MHI_EE_EV_DWORD0(exec_env);
+        event->dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
+
+        ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
+        kfree(event);
+
+        return ret;
 }
 
 static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ev_ccs code)
 {
         struct mhi_ep_ring *ring = &mhi_cntrl->mhi_cmd->ring;
-        struct mhi_ring_element event = {};
+        struct mhi_ring_element *event;
+        int ret;
 
-        event.ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(struct mhi_ring_element));
-        event.dword[0] = MHI_CC_EV_DWORD0(code);
-        event.dword[1] = MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
+        event = kzalloc(sizeof(struct mhi_ring_element), GFP_KERNEL);
+        if (!event)
+                return -ENOMEM;
 
-        return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
+        event->ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(struct mhi_ring_element));
+        event->dword[0] = MHI_CC_EV_DWORD0(code);
+        event->dword[1] = MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
+
+        ret = mhi_ep_send_event(mhi_cntrl, 0, event, 0);
+        kfree(event);
+
+        return ret;
 }
 
 static int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
@@ -268,7 +268,8 @@ static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
 
 static bool is_valid_ring_ptr(struct mhi_ring *ring, dma_addr_t addr)
 {
-        return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len;
+        return addr >= ring->iommu_base && addr < ring->iommu_base + ring->len &&
+               !(addr & (sizeof(struct mhi_ring_element) - 1));
 }
 
 int mhi_destroy_device(struct device *dev, void *data)
@@ -642,6 +643,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
                         mhi_del_ring_element(mhi_cntrl, tre_ring);
                         local_rp = tre_ring->rp;
 
+                        read_unlock_bh(&mhi_chan->lock);
+
                         /* notify client */
                         mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
 
@@ -667,6 +670,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
                                         kfree(buf_info->cb_buf);
                                 }
                         }
+
+                        read_lock_bh(&mhi_chan->lock);
                 }
                 break;
         } /* CC_EOT */
@@ -1119,17 +1124,15 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
         if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
                 return -EIO;
 
-        read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
-
         ret = mhi_is_ring_full(mhi_cntrl, tre_ring);
-        if (unlikely(ret)) {
-                ret = -EAGAIN;
-                goto exit_unlock;
-        }
+        if (unlikely(ret))
+                return -EAGAIN;
 
         ret = mhi_gen_tre(mhi_cntrl, mhi_chan, buf_info, mflags);
         if (unlikely(ret))
-                goto exit_unlock;
+                return ret;
+
+        read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
 
         /* Packet is queued, take a usage ref to exit M3 if necessary
          * for host->device buffer, balanced put is done on buffer completion
@@ -1149,7 +1152,6 @@ static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
         if (dir == DMA_FROM_DEVICE)
                 mhi_cntrl->runtime_put(mhi_cntrl);
 
-exit_unlock:
         read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
 
         return ret;
@@ -1201,6 +1203,9 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
         int eot, eob, chain, bei;
         int ret;
 
+        /* Protect accesses for reading and incrementing WP */
+        write_lock_bh(&mhi_chan->lock);
+
         buf_ring = &mhi_chan->buf_ring;
         tre_ring = &mhi_chan->tre_ring;
 
@@ -1218,8 +1223,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
 
         if (!info->pre_mapped) {
                 ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
-                if (ret)
+                if (ret) {
+                        write_unlock_bh(&mhi_chan->lock);
                         return ret;
+                }
         }
 
         eob = !!(flags & MHI_EOB);
@@ -1236,6 +1243,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
         mhi_add_ring_element(mhi_cntrl, tre_ring);
         mhi_add_ring_element(mhi_cntrl, buf_ring);
 
+        write_unlock_bh(&mhi_chan->lock);
+
         return 0;
 }
@@ -24,10 +24,13 @@
 #include <linux/random.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
+#include <linux/string.h>
 #include <linux/uaccess.h>
 
 #define RNG_MODULE_NAME         "hw_random"
 
+#define RNG_BUFFER_SIZE (SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES)
+
 static struct hwrng *current_rng;
 /* the current rng has been explicitly chosen by user via sysfs */
 static int cur_rng_set_by_user;
@@ -59,7 +62,7 @@ static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
 
 static size_t rng_buffer_size(void)
 {
-        return SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES;
+        return RNG_BUFFER_SIZE;
 }
 
 static void add_early_randomness(struct hwrng *rng)
@@ -211,6 +214,7 @@ static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
 static ssize_t rng_dev_read(struct file *filp, char __user *buf,
                             size_t size, loff_t *offp)
 {
+        u8 buffer[RNG_BUFFER_SIZE];
         ssize_t ret = 0;
         int err = 0;
         int bytes_read, len;
@@ -238,34 +242,37 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
                 if (bytes_read < 0) {
                         err = bytes_read;
                         goto out_unlock_reading;
-                }
-                data_avail = bytes_read;
-        }
-
-        if (!data_avail) {
-                if (filp->f_flags & O_NONBLOCK) {
+                } else if (bytes_read == 0 &&
+                           (filp->f_flags & O_NONBLOCK)) {
                         err = -EAGAIN;
                         goto out_unlock_reading;
                 }
-        } else {
-                len = data_avail;
+
+                data_avail = bytes_read;
+        }
+
+        len = data_avail;
+        if (len) {
                 if (len > size)
                         len = size;
 
                 data_avail -= len;
 
-                if (copy_to_user(buf + ret, rng_buffer + data_avail,
-                                 len)) {
+                memcpy(buffer, rng_buffer + data_avail, len);
+        }
+        mutex_unlock(&reading_mutex);
+        put_rng(rng);
+
+        if (len) {
+                if (copy_to_user(buf + ret, buffer, len)) {
                         err = -EFAULT;
-                        goto out_unlock_reading;
+                        goto out;
                 }
 
                 size -= len;
                 ret += len;
         }
 
-        mutex_unlock(&reading_mutex);
-        put_rng(rng);
-
         if (need_resched())
                 schedule_timeout_interruptible(1);
@@ -276,6 +283,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
                 }
         }
 out:
+        memzero_explicit(buffer, sizeof(buffer));
         return ret ? : err;
 
 out_unlock_reading:
@@ -493,6 +493,30 @@ static inline int intel_pstate_get_cppc_guaranteed(int cpu)
 }
 #endif /* CONFIG_ACPI_CPPC_LIB */
 
+static int intel_pstate_freq_to_hwp_rel(struct cpudata *cpu, int freq,
+					unsigned int relation)
+{
+	if (freq == cpu->pstate.turbo_freq)
+		return cpu->pstate.turbo_pstate;
+
+	if (freq == cpu->pstate.max_freq)
+		return cpu->pstate.max_pstate;
+
+	switch (relation) {
+	case CPUFREQ_RELATION_H:
+		return freq / cpu->pstate.scaling;
+	case CPUFREQ_RELATION_C:
+		return DIV_ROUND_CLOSEST(freq, cpu->pstate.scaling);
+	}
+
+	return DIV_ROUND_UP(freq, cpu->pstate.scaling);
+}
+
+static int intel_pstate_freq_to_hwp(struct cpudata *cpu, int freq)
+{
+	return intel_pstate_freq_to_hwp_rel(cpu, freq, CPUFREQ_RELATION_L);
+}
+
 /**
  * intel_pstate_hybrid_hwp_adjust - Calibrate HWP performance levels.
  * @cpu: Target CPU.
@@ -510,6 +534,7 @@ static void intel_pstate_hybrid_hwp_adjust(struct cpudata *cpu)
 	int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling;
 	int perf_ctl_turbo = pstate_funcs.get_turbo(cpu->cpu);
 	int scaling = cpu->pstate.scaling;
+	int freq;
 
 	pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys);
 	pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo);
@@ -523,16 +548,16 @@ static void intel_pstate_hybrid_hwp_adjust(struct cpudata *cpu)
 	cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling,
 					 perf_ctl_scaling);
 
-	cpu->pstate.max_pstate_physical =
-			DIV_ROUND_UP(perf_ctl_max_phys * perf_ctl_scaling,
-				     scaling);
+	freq = perf_ctl_max_phys * perf_ctl_scaling;
+	cpu->pstate.max_pstate_physical = intel_pstate_freq_to_hwp(cpu, freq);
 
-	cpu->pstate.min_freq = cpu->pstate.min_pstate * perf_ctl_scaling;
+	freq = cpu->pstate.min_pstate * perf_ctl_scaling;
+	cpu->pstate.min_freq = freq;
 	/*
 	 * Cast the min P-state value retrieved via pstate_funcs.get_min() to
 	 * the effective range of HWP performance levels.
 	 */
-	cpu->pstate.min_pstate = DIV_ROUND_UP(cpu->pstate.min_freq, scaling);
+	cpu->pstate.min_pstate = intel_pstate_freq_to_hwp(cpu, freq);
 }
 
 static inline void update_turbo_state(void)
@@ -2493,13 +2518,12 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu,
 	 * abstract values to represent performance rather than pure ratios.
 	 */
 	if (hwp_active && cpu->pstate.scaling != perf_ctl_scaling) {
-		int scaling = cpu->pstate.scaling;
 		int freq;
 
 		freq = max_policy_perf * perf_ctl_scaling;
-		max_policy_perf = DIV_ROUND_UP(freq, scaling);
+		max_policy_perf = intel_pstate_freq_to_hwp(cpu, freq);
 		freq = min_policy_perf * perf_ctl_scaling;
-		min_policy_perf = DIV_ROUND_UP(freq, scaling);
+		min_policy_perf = intel_pstate_freq_to_hwp(cpu, freq);
 	}
 
 	pr_debug("cpu:%d min_policy_perf:%d max_policy_perf:%d\n",
@@ -2873,18 +2897,7 @@ static int intel_cpufreq_target(struct cpufreq_policy *policy,
 
 	cpufreq_freq_transition_begin(policy, &freqs);
 
-	switch (relation) {
-	case CPUFREQ_RELATION_L:
-		target_pstate = DIV_ROUND_UP(freqs.new, cpu->pstate.scaling);
-		break;
-	case CPUFREQ_RELATION_H:
-		target_pstate = freqs.new / cpu->pstate.scaling;
-		break;
-	default:
-		target_pstate = DIV_ROUND_CLOSEST(freqs.new, cpu->pstate.scaling);
-		break;
-	}
-
+	target_pstate = intel_pstate_freq_to_hwp_rel(cpu, freqs.new, relation);
 	target_pstate = intel_cpufreq_update_pstate(policy, target_pstate, false);
 
 	freqs.new = target_pstate * cpu->pstate.scaling;
@@ -2902,7 +2915,7 @@ static unsigned int intel_cpufreq_fast_switch(struct cpufreq_policy *policy,
 
 	update_turbo_state();
 
-	target_pstate = DIV_ROUND_UP(target_freq, cpu->pstate.scaling);
+	target_pstate = intel_pstate_freq_to_hwp(cpu, target_freq);
 	target_pstate = intel_cpufreq_update_pstate(policy, target_pstate, true);
 
@@ -450,7 +450,7 @@ static int alloc_hpa(struct cxl_region *cxlr, resource_size_t size)
 	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent);
 	struct cxl_region_params *p = &cxlr->params;
 	struct resource *res;
-	u32 remainder = 0;
+	u64 remainder = 0;
 
 	lockdep_assert_held_write(&cxl_region_rwsem);
 
@@ -470,7 +470,7 @@ static int alloc_hpa(struct cxl_region *cxlr, resource_size_t size)
 	    (cxlr->mode == CXL_DECODER_PMEM && uuid_is_null(&p->uuid)))
 		return -ENXIO;
 
-	div_u64_rem(size, SZ_256M * p->interleave_ways, &remainder);
+	div64_u64_rem(size, (u64)SZ_256M * p->interleave_ways, &remainder);
 	if (remainder)
 		return -EINVAL;
 
@@ -1707,7 +1707,7 @@ static ssize_t trans_stat_show(struct device *dev,
 			       struct device_attribute *attr, char *buf)
 {
 	struct devfreq *df = to_devfreq(dev);
-	ssize_t len;
+	ssize_t len = 0;
 	int i, j;
 	unsigned int max_state;
 
@@ -1716,7 +1716,7 @@ static ssize_t trans_stat_show(struct device *dev,
 	max_state = df->max_state;
 
 	if (max_state == 0)
-		return sprintf(buf, "Not Supported.\n");
+		return scnprintf(buf, PAGE_SIZE, "Not Supported.\n");
 
 	mutex_lock(&df->lock);
 	if (!df->stop_polling &&
@@ -1726,31 +1726,52 @@ static ssize_t trans_stat_show(struct device *dev,
 	}
 	mutex_unlock(&df->lock);
 
-	len = sprintf(buf, "     From  :   To\n");
-	len += sprintf(buf + len, "           :");
-	for (i = 0; i < max_state; i++)
-		len += sprintf(buf + len, "%10lu",
-				df->freq_table[i]);
+	len += scnprintf(buf + len, PAGE_SIZE - len, "     From  :   To\n");
+	len += scnprintf(buf + len, PAGE_SIZE - len, "           :");
+	for (i = 0; i < max_state; i++) {
+		if (len >= PAGE_SIZE - 1)
+			break;
+		len += scnprintf(buf + len, PAGE_SIZE - len, "%10lu",
+				 df->freq_table[i]);
+	}
+	if (len >= PAGE_SIZE - 1)
+		return PAGE_SIZE - 1;
 
-	len += sprintf(buf + len, "   time(ms)\n");
+	len += scnprintf(buf + len, PAGE_SIZE - len, "   time(ms)\n");
 
 	for (i = 0; i < max_state; i++) {
+		if (len >= PAGE_SIZE - 1)
+			break;
 		if (df->freq_table[i] == df->previous_freq)
-			len += sprintf(buf + len, "*");
+			len += scnprintf(buf + len, PAGE_SIZE - len, "*");
 		else
-			len += sprintf(buf + len, " ");
+			len += scnprintf(buf + len, PAGE_SIZE - len, " ");
+		if (len >= PAGE_SIZE - 1)
+			break;
 
-		len += sprintf(buf + len, "%10lu:", df->freq_table[i]);
-		for (j = 0; j < max_state; j++)
-			len += sprintf(buf + len, "%10u",
-				df->stats.trans_table[(i * max_state) + j]);
+		len += scnprintf(buf + len, PAGE_SIZE - len, "%10lu:",
+				 df->freq_table[i]);
+		for (j = 0; j < max_state; j++) {
+			if (len >= PAGE_SIZE - 1)
+				break;
+			len += scnprintf(buf + len, PAGE_SIZE - len, "%10u",
+					 df->stats.trans_table[(i * max_state) + j]);
+		}
+		if (len >= PAGE_SIZE - 1)
+			break;
 
-		len += sprintf(buf + len, "%10llu\n", (u64)
-			jiffies64_to_msecs(df->stats.time_in_state[i]));
+		len += scnprintf(buf + len, PAGE_SIZE - len, "%10llu\n", (u64)
+				 jiffies64_to_msecs(df->stats.time_in_state[i]));
 	}
 
-	len += sprintf(buf + len, "Total transition : %u\n",
-			df->stats.total_trans);
+	if (len < PAGE_SIZE - 1)
+		len += scnprintf(buf + len, PAGE_SIZE - len, "Total transition : %u\n",
+				 df->stats.total_trans);
+	if (len >= PAGE_SIZE - 1) {
+		pr_warn_once("devfreq transition table exceeds PAGE_SIZE. Disabling\n");
+		return -EFBIG;
+	}
+
 	return len;
 }
 
@@ -1103,6 +1103,9 @@ EXPORT_SYMBOL_GPL(dma_async_device_channel_register);
 static void __dma_async_device_channel_unregister(struct dma_device *device,
 						  struct dma_chan *chan)
 {
+	if (chan->local == NULL)
+		return;
+
 	WARN_ONCE(!device->device_release && chan->client_count,
 		  "%s called while %d clients hold a reference\n",
 		  __func__, chan->client_count);
@@ -244,6 +244,7 @@ void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
 void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem);
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer);
+bool shmem_channel_free(struct scmi_shared_mem __iomem *shmem);
 
 /* declarations for message passing transports */
 struct scmi_msg_payld;
@@ -43,6 +43,20 @@ static void rx_callback(struct mbox_client *cl, void *m)
 {
 	struct scmi_mailbox *smbox = client_to_scmi_mailbox(cl);
 
+	/*
+	 * An A2P IRQ is NOT valid when received while the platform still has
+	 * the ownership of the channel, because the platform at first releases
+	 * the SMT channel and then sends the completion interrupt.
+	 *
+	 * This addresses a possible race condition in which a spurious IRQ from
+	 * a previous timed-out reply which arrived late could be wrongly
+	 * associated with the next pending transaction.
+	 */
+	if (cl->knows_txdone && !shmem_channel_free(smbox->shmem)) {
+		dev_warn(smbox->cinfo->dev, "Ignoring spurious A2P IRQ !\n");
+		return;
+	}
+
 	scmi_rx_callback(smbox->cinfo, shmem_read_header(smbox->shmem), NULL);
 }
 
@@ -122,3 +122,9 @@ bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 	       (SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR |
 		SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
 }
+
+bool shmem_channel_free(struct scmi_shared_mem __iomem *shmem)
+{
+	return (ioread32(&shmem->channel_status) &
+			SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);
+}
@@ -318,20 +318,27 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
 		switch (flow_type) {
 		case IRQ_TYPE_LEVEL_HIGH:
 			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IEV, 1);
+			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IC, 1);
 			break;
 		case IRQ_TYPE_LEVEL_LOW:
 			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IEV, 0);
+			sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IC, 1);
 			break;
 		case IRQ_TYPE_EDGE_RISING:
 		case IRQ_TYPE_EDGE_FALLING:
 		case IRQ_TYPE_EDGE_BOTH:
 			state = sprd_eic_get(chip, offset);
-			if (state)
+			if (state) {
 				sprd_eic_update(chip, offset,
 						SPRD_EIC_DBNC_IEV, 0);
-			else
+				sprd_eic_update(chip, offset,
+						SPRD_EIC_DBNC_IC, 1);
+			} else {
 				sprd_eic_update(chip, offset,
 						SPRD_EIC_DBNC_IEV, 1);
+				sprd_eic_update(chip, offset,
+						SPRD_EIC_DBNC_IC, 1);
+			}
 			break;
 		default:
 			return -ENOTSUPP;
@@ -343,20 +350,27 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
 		switch (flow_type) {
 		case IRQ_TYPE_LEVEL_HIGH:
 			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTPOL, 0);
+			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTCLR, 1);
 			break;
 		case IRQ_TYPE_LEVEL_LOW:
 			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTPOL, 1);
+			sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTCLR, 1);
 			break;
 		case IRQ_TYPE_EDGE_RISING:
 		case IRQ_TYPE_EDGE_FALLING:
 		case IRQ_TYPE_EDGE_BOTH:
 			state = sprd_eic_get(chip, offset);
-			if (state)
+			if (state) {
 				sprd_eic_update(chip, offset,
 						SPRD_EIC_LATCH_INTPOL, 0);
-			else
+				sprd_eic_update(chip, offset,
+						SPRD_EIC_LATCH_INTCLR, 1);
+			} else {
 				sprd_eic_update(chip, offset,
 						SPRD_EIC_LATCH_INTPOL, 1);
+				sprd_eic_update(chip, offset,
+						SPRD_EIC_LATCH_INTCLR, 1);
+			}
 			break;
 		default:
 			return -ENOTSUPP;
@@ -370,29 +384,34 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 1);
+			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_edge_irq);
 			break;
 		case IRQ_TYPE_EDGE_FALLING:
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 0);
+			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_edge_irq);
 			break;
 		case IRQ_TYPE_EDGE_BOTH:
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 1);
+			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_edge_irq);
 			break;
 		case IRQ_TYPE_LEVEL_HIGH:
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 1);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 1);
+			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_level_irq);
 			break;
 		case IRQ_TYPE_LEVEL_LOW:
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 1);
 			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 0);
+			sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_level_irq);
 			break;
 		default:
@@ -405,29 +424,34 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 1);
+			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_edge_irq);
 			break;
 		case IRQ_TYPE_EDGE_FALLING:
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 0);
+			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_edge_irq);
 			break;
 		case IRQ_TYPE_EDGE_BOTH:
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 1);
+			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_edge_irq);
 			break;
 		case IRQ_TYPE_LEVEL_HIGH:
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 1);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 1);
+			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_level_irq);
 			break;
 		case IRQ_TYPE_LEVEL_LOW:
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 1);
 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 0);
+			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1);
 			irq_set_handler_locked(data, handle_level_irq);
 			break;
 		default:
@@ -1626,6 +1626,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
 			.ignore_wake = "ELAN0415:00@9",
 		},
 	},
+	{
+		/*
+		 * Spurious wakeups from TP_ATTN# pin
+		 * Found in BIOS 0.35
+		 * https://gitlab.freedesktop.org/drm/amd/-/issues/3073
+		 */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "GPD"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "G1619-04"),
+		},
+		.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
+			.ignore_wake = "PNP0C50:00@8",
+		},
+	},
 	{} /* Terminating entry */
 };
 
@@ -2202,8 +2202,6 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
 
 		pci_wake_from_d3(pdev, TRUE);
 
-		pci_wake_from_d3(pdev, TRUE);
-
 		/*
 		 * For runpm implemented via BACO, PMFW will handle the
 		 * timing for BACO in and out:
@@ -6677,8 +6677,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
 	if (IS_ERR(mst_state))
 		return PTR_ERR(mst_state);
 
-	if (!mst_state->pbn_div)
-		mst_state->pbn_div = dm_mst_get_pbn_divider(aconnector->mst_port->dc_link);
+	mst_state->pbn_div = dm_mst_get_pbn_divider(aconnector->mst_port->dc_link);
 
 	if (!state->duplicated) {
 		int max_bpc = conn_state->max_requested_bpc;
@@ -131,30 +131,27 @@ static int dcn314_get_active_display_cnt_wa(
 	return display_count;
 }
 
-static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable)
+static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context,
+				  bool safe_to_lower, bool disable)
 {
 	struct dc *dc = clk_mgr_base->ctx->dc;
 	int i;
 
 	for (i = 0; i < dc->res_pool->pipe_count; ++i) {
-		struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+		struct pipe_ctx *pipe = safe_to_lower
+			? &context->res_ctx.pipe_ctx[i]
+			: &dc->current_state->res_ctx.pipe_ctx[i];
 
 		if (pipe->top_pipe || pipe->prev_odm_pipe)
 			continue;
 		if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))) {
-			struct stream_encoder *stream_enc = pipe->stream_res.stream_enc;
-
 			if (disable) {
-				if (stream_enc && stream_enc->funcs->disable_fifo)
-					pipe->stream_res.stream_enc->funcs->disable_fifo(stream_enc);
+				if (pipe->stream_res.tg && pipe->stream_res.tg->funcs->immediate_disable_crtc)
+					pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
 
-				pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg);
 				reset_sync_context_for_pipe(dc, context, i);
 			} else {
 				pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
-
-				if (stream_enc && stream_enc->funcs->enable_fifo)
-					pipe->stream_res.stream_enc->funcs->enable_fifo(stream_enc);
 			}
 		}
 	}
 }
@@ -254,11 +251,11 @@ void dcn314_update_clocks(struct clk_mgr *clk_mgr_base,
 	}
 
 	if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
-		dcn314_disable_otg_wa(clk_mgr_base, context, true);
+		dcn314_disable_otg_wa(clk_mgr_base, context, safe_to_lower, true);
 
 		clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
 		dcn314_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz);
-		dcn314_disable_otg_wa(clk_mgr_base, context, false);
+		dcn314_disable_otg_wa(clk_mgr_base, context, safe_to_lower, false);
 
 		update_dispclk = true;
 	}
@@ -818,6 +818,8 @@ bool is_psr_su_specific_panel(struct dc_link *link)
 			isPSRSUSupported = false;
 		else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
 			isPSRSUSupported = false;
+		else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
+			isPSRSUSupported = false;
 		else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1)
 			isPSRSUSupported = true;
 	}
@@ -24,6 +24,7 @@
 
 #include <linux/firmware.h>
 #include <linux/pci.h>
+#include <linux/power_supply.h>
 #include <linux/reboot.h>
 
 #include "amdgpu.h"
@@ -731,16 +732,8 @@ static int smu_late_init(void *handle)
 	 * handle the switch automatically. Driver involvement
 	 * is unnecessary.
 	 */
-	if (!smu->dc_controlled_by_gpio) {
-		ret = smu_set_power_source(smu,
-					   adev->pm.ac_power ? SMU_POWER_SOURCE_AC :
-					   SMU_POWER_SOURCE_DC);
-		if (ret) {
-			dev_err(adev->dev, "Failed to switch to %s mode!\n",
-				adev->pm.ac_power ? "AC" : "DC");
-			return ret;
-		}
-	}
+	adev->pm.ac_power = power_supply_is_system_supplied() > 0;
+	smu_set_ac_dc(smu);
 
 	if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 1)) ||
 	    (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 3)))
@@ -1467,10 +1467,12 @@ static int smu_v11_0_irq_process(struct amdgpu_device *adev,
 		case 0x3:
 			dev_dbg(adev->dev, "Switched to AC mode!\n");
 			schedule_work(&smu->interrupt_work);
+			adev->pm.ac_power = true;
 			break;
 		case 0x4:
 			dev_dbg(adev->dev, "Switched to DC mode!\n");
 			schedule_work(&smu->interrupt_work);
+			adev->pm.ac_power = false;
 			break;
 		case 0x7:
 			/*
@@ -1415,10 +1415,12 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
 		case 0x3:
 			dev_dbg(adev->dev, "Switched to AC mode!\n");
 			smu_v13_0_ack_ac_dc_interrupt(smu);
+			adev->pm.ac_power = true;
 			break;
 		case 0x4:
 			dev_dbg(adev->dev, "Switched to DC mode!\n");
 			smu_v13_0_ack_ac_dc_interrupt(smu);
+			adev->pm.ac_power = false;
 			break;
 		case 0x7:
 			/*
@@ -1742,6 +1742,7 @@ static ssize_t anx7625_aux_transfer(struct drm_dp_aux *aux,
 	u8 request = msg->request & ~DP_AUX_I2C_MOT;
 	int ret = 0;
 
+	mutex_lock(&ctx->aux_lock);
 	pm_runtime_get_sync(dev);
 	msg->reply = 0;
 	switch (request) {
@@ -1758,6 +1759,7 @@ static ssize_t anx7625_aux_transfer(struct drm_dp_aux *aux,
 				msg->size, msg->buffer);
 	pm_runtime_mark_last_busy(dev);
 	pm_runtime_put_autosuspend(dev);
+	mutex_unlock(&ctx->aux_lock);
 
 	return ret;
 }
@@ -2454,7 +2456,9 @@ static void anx7625_bridge_atomic_disable(struct drm_bridge *bridge,
 	ctx->connector = NULL;
 	anx7625_dp_stop(ctx);
 
-	pm_runtime_put_sync(dev);
+	mutex_lock(&ctx->aux_lock);
+	pm_runtime_put_sync_suspend(dev);
+	mutex_unlock(&ctx->aux_lock);
 }
 
 static enum drm_connector_status
@@ -2648,6 +2652,7 @@ static int anx7625_i2c_probe(struct i2c_client *client)
 
 	mutex_init(&platform->lock);
 	mutex_init(&platform->hdcp_wq_lock);
+	mutex_init(&platform->aux_lock);
 
 	INIT_DELAYED_WORK(&platform->hdcp_work, hdcp_check_work_func);
 	platform->hdcp_workqueue = create_workqueue("hdcp workqueue");
@@ -471,6 +471,8 @@ struct anx7625_data {
 	struct workqueue_struct *hdcp_workqueue;
 	/* Lock for hdcp work queue */
 	struct mutex hdcp_wq_lock;
+	/* Lock for aux transfer and disable */
+	struct mutex aux_lock;
 	char edid_block;
 	struct display_timing dt;
 	u8 display_timing_valid;
@@ -54,13 +54,13 @@ static int ptn3460_read_bytes(struct ptn3460_bridge *ptn_bridge, char addr,
 	int ret;
 
 	ret = i2c_master_send(ptn_bridge->client, &addr, 1);
-	if (ret <= 0) {
+	if (ret < 0) {
 		DRM_ERROR("Failed to send i2c command, ret=%d\n", ret);
 		return ret;
 	}
 
 	ret = i2c_master_recv(ptn_bridge->client, buf, len);
-	if (ret <= 0) {
+	if (ret < 0) {
 		DRM_ERROR("Failed to recv i2c data, ret=%d\n", ret);
 		return ret;
 	}
@@ -78,7 +78,7 @@ static int ptn3460_write_byte(struct ptn3460_bridge *ptn_bridge, char addr,
 	buf[1] = val;
 
 	ret = i2c_master_send(ptn_bridge->client, buf, ARRAY_SIZE(buf));
-	if (ret <= 0) {
+	if (ret < 0) {
 		DRM_ERROR("Failed to send i2c command, ret=%d\n", ret);
 		return ret;
 	}
@@ -106,6 +106,7 @@ struct ps8640 {
 	struct device_link *link;
 	bool pre_enabled;
 	bool need_post_hpd_delay;
+	struct mutex aux_lock;
 };
 
 static const struct regmap_config ps8640_regmap_config[] = {
@@ -353,11 +354,20 @@ static ssize_t ps8640_aux_transfer(struct drm_dp_aux *aux,
 	struct device *dev = &ps_bridge->page[PAGE0_DP_CNTL]->dev;
 	int ret;
 
+	mutex_lock(&ps_bridge->aux_lock);
 	pm_runtime_get_sync(dev);
+	ret = _ps8640_wait_hpd_asserted(ps_bridge, 200 * 1000);
+	if (ret) {
+		pm_runtime_put_sync_suspend(dev);
+		goto exit;
+	}
 	ret = ps8640_aux_transfer_msg(aux, msg);
 	pm_runtime_mark_last_busy(dev);
 	pm_runtime_put_autosuspend(dev);
 
+exit:
+	mutex_unlock(&ps_bridge->aux_lock);
+
 	return ret;
 }
 
@@ -476,7 +486,18 @@ static void ps8640_post_disable(struct drm_bridge *bridge)
 	ps_bridge->pre_enabled = false;
 
 	ps8640_bridge_vdo_control(ps_bridge, DISABLE);
+
+	/*
+	 * The bridge seems to expect everything to be power cycled at the
+	 * disable process, so grab a lock here to make sure
+	 * ps8640_aux_transfer() is not holding a runtime PM reference and
+	 * preventing the bridge from suspend.
+	 */
+	mutex_lock(&ps_bridge->aux_lock);
+
 	pm_runtime_put_sync_suspend(&ps_bridge->page[PAGE0_DP_CNTL]->dev);
+
+	mutex_unlock(&ps_bridge->aux_lock);
 }
 
 static int ps8640_bridge_attach(struct drm_bridge *bridge,
@@ -657,6 +678,8 @@ static int ps8640_probe(struct i2c_client *client)
 	if (!ps_bridge)
 		return -ENOMEM;
 
+	mutex_init(&ps_bridge->aux_lock);
+
 	ps_bridge->supplies[0].supply = "vdd12";
 	ps_bridge->supplies[1].supply = "vdd33";
 	ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ps_bridge->supplies),
@@ -171,7 +171,6 @@ struct sii902x {
 	struct drm_connector connector;
 	struct gpio_desc *reset_gpio;
 	struct i2c_mux_core *i2cmux;
-	struct regulator_bulk_data supplies[2];
 	bool sink_is_hdmi;
 	/*
 	 * Mutex protects audio and video functions from interfering
@@ -1041,6 +1040,26 @@ static int sii902x_init(struct sii902x *sii902x)
 		return ret;
 	}
 
+	ret = sii902x_audio_codec_init(sii902x, dev);
+	if (ret)
+		return ret;
+
+	i2c_set_clientdata(sii902x->i2c, sii902x);
+
+	sii902x->i2cmux = i2c_mux_alloc(sii902x->i2c->adapter, dev,
+					1, 0, I2C_MUX_GATE,
+					sii902x_i2c_bypass_select,
+					sii902x_i2c_bypass_deselect);
+	if (!sii902x->i2cmux) {
+		ret = -ENOMEM;
+		goto err_unreg_audio;
+	}
+
+	sii902x->i2cmux->priv = sii902x;
+	ret = i2c_mux_add_adapter(sii902x->i2cmux, 0, 0, 0);
+	if (ret)
+		goto err_unreg_audio;
+
 	sii902x->bridge.funcs = &sii902x_bridge_funcs;
 	sii902x->bridge.of_node = dev->of_node;
 	sii902x->bridge.timings = &default_sii902x_timings;
@@ -1051,19 +1070,13 @@ static int sii902x_init(struct sii902x *sii902x)
 
 	drm_bridge_add(&sii902x->bridge);
 
-	sii902x_audio_codec_init(sii902x, dev);
+	return 0;
 
-	i2c_set_clientdata(sii902x->i2c, sii902x);
+err_unreg_audio:
+	if (!PTR_ERR_OR_ZERO(sii902x->audio.pdev))
+		platform_device_unregister(sii902x->audio.pdev);
 
-	sii902x->i2cmux = i2c_mux_alloc(sii902x->i2c->adapter, dev,
-					1, 0, I2C_MUX_GATE,
-					sii902x_i2c_bypass_select,
-					sii902x_i2c_bypass_deselect);
-	if (!sii902x->i2cmux)
-		return -ENOMEM;
-
-	sii902x->i2cmux->priv = sii902x;
-
-	return i2c_mux_add_adapter(sii902x->i2cmux, 0, 0, 0);
+	return ret;
 }
 
 static int sii902x_probe(struct i2c_client *client,
@@ -1072,6 +1085,7 @@ static int sii902x_probe(struct i2c_client *client,
 	struct device *dev = &client->dev;
 	struct device_node *endpoint;
 	struct sii902x *sii902x;
+	static const char * const supplies[] = {"iovcc", "cvcc12"};
 	int ret;
 
 	ret = i2c_check_functionality(client->adapter,
@@ -1122,38 +1136,22 @@ static int sii902x_probe(struct i2c_client *client,
 
 	mutex_init(&sii902x->mutex);
 
-	sii902x->supplies[0].supply = "iovcc";
-	sii902x->supplies[1].supply = "cvcc12";
-	ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(sii902x->supplies),
-				      sii902x->supplies);
+	ret = devm_regulator_bulk_get_enable(dev, ARRAY_SIZE(supplies), supplies);
 	if (ret < 0)
-		return ret;
+		return dev_err_probe(dev, ret, "Failed to enable supplies");
 
-	ret = regulator_bulk_enable(ARRAY_SIZE(sii902x->supplies),
-				    sii902x->supplies);
-	if (ret < 0) {
-		dev_err_probe(dev, ret, "Failed to enable supplies");
-		return ret;
-	}
-
-	ret = sii902x_init(sii902x);
-	if (ret < 0) {
-		regulator_bulk_disable(ARRAY_SIZE(sii902x->supplies),
-				       sii902x->supplies);
-	}
-
-	return ret;
+	return sii902x_init(sii902x);
 }
 
 static void sii902x_remove(struct i2c_client *client)
 {
 	struct sii902x *sii902x = i2c_get_clientdata(client);
 
-	i2c_mux_del_adapters(sii902x->i2cmux);
 	drm_bridge_remove(&sii902x->bridge);
-	regulator_bulk_disable(ARRAY_SIZE(sii902x->supplies),
-			       sii902x->supplies);
+	i2c_mux_del_adapters(sii902x->i2cmux);
+
+	if (!PTR_ERR_OR_ZERO(sii902x->audio.pdev))
+		platform_device_unregister(sii902x->audio.pdev);
 }
 
 static const struct of_device_id sii902x_dt_ids[] = {
@@ -1382,6 +1382,7 @@ int drm_mode_page_flip_ioctl(struct drm_device *dev,
 out:
 	if (fb)
 		drm_framebuffer_put(fb);
+	fb = NULL;
 	if (plane->old_fb)
 		drm_framebuffer_put(plane->old_fb);
 	plane->old_fb = NULL;
@@ -319,9 +319,9 @@ static void decon_win_set_bldmod(struct decon_context *ctx, unsigned int win,
 static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
 				 struct drm_framebuffer *fb)
 {
-	struct exynos_drm_plane plane = ctx->planes[win];
+	struct exynos_drm_plane *plane = &ctx->planes[win];
 	struct exynos_drm_plane_state *state =
-		to_exynos_plane_state(plane.base.state);
+		to_exynos_plane_state(plane->base.state);
 	unsigned int alpha = state->base.alpha;
 	unsigned int pixel_alpha;
 	unsigned long val;
@@ -662,9 +662,9 @@ static void fimd_win_set_bldmod(struct fimd_context *ctx, unsigned int win,
 static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win,
 				struct drm_framebuffer *fb, int width)
 {
-	struct exynos_drm_plane plane = ctx->planes[win];
+	struct exynos_drm_plane *plane = &ctx->planes[win];
 	struct exynos_drm_plane_state *state =
-		to_exynos_plane_state(plane.base.state);
+		to_exynos_plane_state(plane->base.state);
 	uint32_t pixel_format = fb->format->format;
 	unsigned int alpha = state->base.alpha;
 	u32 val = WINCONx_ENWIN;
@@ -1342,7 +1342,7 @@ static int __maybe_unused gsc_runtime_resume(struct device *dev)
 	for (i = 0; i < ctx->num_clocks; i++) {
 		ret = clk_prepare_enable(ctx->clocks[i]);
 		if (ret) {
-			while (--i > 0)
+			while (--i >= 0)
 				clk_disable_unprepare(ctx->clocks[i]);
 			return ret;
 		}
@@ -108,6 +108,9 @@ nouveau_vma_new(struct nouveau_bo *nvbo, struct nouveau_vmm *vmm,
 	} else {
 		ret = nvif_vmm_get(&vmm->vmm, PTES, false, mem->mem.page, 0,
 				   mem->mem.size, &tmp);
+		if (ret)
+			goto done;
+
 		vma->addr = tmp.addr;
 	}
 
@@ -975,6 +975,8 @@ static const struct panel_desc auo_b116xak01 = {
 	},
 	.delay = {
 		.hpd_absent = 200,
+		.unprepare = 500,
+		.enable = 50,
 	},
 };
 
@@ -1870,7 +1872,7 @@ static const struct edp_panel_entry edp_panels[] = {
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x1062, &delay_200_500_e50, "B120XAN01.0"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x1e9b, &delay_200_500_e50, "B133UAN02.1"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x1ea5, &delay_200_500_e50, "B116XAK01.6"),
-	EDP_PANEL_ENTRY('A', 'U', 'O', 0x405c, &auo_b116xak01.delay, "B116XAK01"),
+	EDP_PANEL_ENTRY('A', 'U', 'O', 0x405c, &auo_b116xak01.delay, "B116XAK01.0"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x615c, &delay_200_500_e50, "B116XAN06.1"),
 	EDP_PANEL_ENTRY('A', 'U', 'O', 0x8594, &delay_200_500_e50, "B133UAN01.0"),
@@ -3603,6 +3603,7 @@ static const struct panel_desc tianma_tm070jdhg30 = {
 	},
 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
 };
 
 static const struct panel_desc tianma_tm070jvhg33 = {
@@ -3615,6 +3616,7 @@ static const struct panel_desc tianma_tm070jvhg33 = {
 	},
 	.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
 	.connector_type = DRM_MODE_CONNECTOR_LVDS,
+	.bus_flags = DRM_BUS_FLAG_DE_HIGH,
 };
 
 static const struct display_timing tianma_tm070rvhg71_timing = {
@@ -170,13 +170,13 @@ static void tidss_crtc_atomic_flush(struct drm_crtc *crtc,
 	struct tidss_device *tidss = to_tidss(ddev);
 	unsigned long flags;
 
-	dev_dbg(ddev->dev,
-		"%s: %s enabled %d, needs modeset %d, event %p\n", __func__,
-		crtc->name, drm_atomic_crtc_needs_modeset(crtc->state),
-		crtc->state->enable, crtc->state->event);
+	dev_dbg(ddev->dev, "%s: %s is %sactive, %s modeset, event %p\n",
+		__func__, crtc->name, crtc->state->active ? "" : "not ",
+		drm_atomic_crtc_needs_modeset(crtc->state) ? "needs" : "doesn't need",
+		crtc->state->event);
 
 	/* There is nothing to do if CRTC is not going to be enabled. */
-	if (!crtc->state->enable)
+	if (!crtc->state->active)
 		return;
 
 	/*
@@ -6,6 +6,7 @@
  */
 
 #include <linux/bitops.h>
+#include <linux/bitfield.h>
 #include <linux/iio/events.h>
 #include <linux/iio/iio.h>
 #include <linux/interrupt.h>
@@ -28,6 +29,7 @@
 #define AD7091R_REG_RESULT_CONV_RESULT(x)   ((x) & 0xfff)
 
 /* AD7091R_REG_CONF */
+#define AD7091R_REG_CONF_ALERT_EN   BIT(4)
 #define AD7091R_REG_CONF_AUTO   BIT(8)
 #define AD7091R_REG_CONF_CMD    BIT(10)
 
@@ -49,6 +51,27 @@ struct ad7091r_state {
 	struct mutex lock; /*lock to prevent concurent reads */
 };
 
+const struct iio_event_spec ad7091r_events[] = {
+	{
+		.type = IIO_EV_TYPE_THRESH,
+		.dir = IIO_EV_DIR_RISING,
+		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
+				 BIT(IIO_EV_INFO_ENABLE),
+	},
+	{
+		.type = IIO_EV_TYPE_THRESH,
+		.dir = IIO_EV_DIR_FALLING,
+		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
+				 BIT(IIO_EV_INFO_ENABLE),
+	},
+	{
+		.type = IIO_EV_TYPE_THRESH,
+		.dir = IIO_EV_DIR_EITHER,
+		.mask_separate = BIT(IIO_EV_INFO_HYSTERESIS),
+	},
+};
+EXPORT_SYMBOL_NS_GPL(ad7091r_events, IIO_AD7091R);
+
 static int ad7091r_set_mode(struct ad7091r_state *st, enum ad7091r_mode mode)
 {
 	int ret, conf;
@@ -168,8 +191,142 @@ static int ad7091r_read_raw(struct iio_dev *iio_dev,
 	return ret;
 }
 
+static int ad7091r_read_event_config(struct iio_dev *indio_dev,
+				     const struct iio_chan_spec *chan,
+				     enum iio_event_type type,
+				     enum iio_event_direction dir)
+{
+	struct ad7091r_state *st = iio_priv(indio_dev);
+	int val, ret;
+
+	switch (dir) {
+	case IIO_EV_DIR_RISING:
+		ret = regmap_read(st->map,
+				  AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
+				  &val);
+		if (ret)
+			return ret;
+		return val != AD7091R_HIGH_LIMIT;
+	case IIO_EV_DIR_FALLING:
+		ret = regmap_read(st->map,
+				  AD7091R_REG_CH_LOW_LIMIT(chan->channel),
+				  &val);
+		if (ret)
+			return ret;
+		return val != AD7091R_LOW_LIMIT;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int ad7091r_write_event_config(struct iio_dev *indio_dev,
+				      const struct iio_chan_spec *chan,
+				      enum iio_event_type type,
+				      enum iio_event_direction dir, int state)
+{
+	struct ad7091r_state *st = iio_priv(indio_dev);
+
+	if (state) {
+		return regmap_set_bits(st->map, AD7091R_REG_CONF,
+				       AD7091R_REG_CONF_ALERT_EN);
+	} else {
+		/*
+		 * Set thresholds either to 0 or to 2^12 - 1 as appropriate to
+		 * prevent alerts and thus disable event generation.
+		 */
+		switch (dir) {
+		case IIO_EV_DIR_RISING:
+			return regmap_write(st->map,
+					    AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
+					    AD7091R_HIGH_LIMIT);
+		case IIO_EV_DIR_FALLING:
+			return regmap_write(st->map,
+					    AD7091R_REG_CH_LOW_LIMIT(chan->channel),
+					    AD7091R_LOW_LIMIT);
+		default:
+			return -EINVAL;
+		}
+	}
+}
+
+static int ad7091r_read_event_value(struct iio_dev *indio_dev,
+				    const struct iio_chan_spec *chan,
+				    enum iio_event_type type,
+				    enum iio_event_direction dir,
+				    enum iio_event_info info, int *val, int *val2)
+{
+	struct ad7091r_state *st = iio_priv(indio_dev);
+	int ret;
+
+	switch (info) {
+	case IIO_EV_INFO_VALUE:
+		switch (dir) {
+		case IIO_EV_DIR_RISING:
+			ret = regmap_read(st->map,
+					  AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
+					  val);
+			if (ret)
+				return ret;
+			return IIO_VAL_INT;
+		case IIO_EV_DIR_FALLING:
+			ret = regmap_read(st->map,
+					  AD7091R_REG_CH_LOW_LIMIT(chan->channel),
+					  val);
+			if (ret)
+				return ret;
+			return IIO_VAL_INT;
+		default:
+			return -EINVAL;
+		}
+	case IIO_EV_INFO_HYSTERESIS:
+		ret = regmap_read(st->map,
+				  AD7091R_REG_CH_HYSTERESIS(chan->channel),
+				  val);
+		if (ret)
+			return ret;
+		return IIO_VAL_INT;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int ad7091r_write_event_value(struct iio_dev *indio_dev,
+				     const struct iio_chan_spec *chan,
+				     enum iio_event_type type,
+				     enum iio_event_direction dir,
+				     enum iio_event_info info, int val, int val2)
+{
+	struct ad7091r_state *st = iio_priv(indio_dev);
+
+	switch (info) {
+	case IIO_EV_INFO_VALUE:
+		switch (dir) {
+		case IIO_EV_DIR_RISING:
+			return regmap_write(st->map,
+					    AD7091R_REG_CH_HIGH_LIMIT(chan->channel),
+					    val);
+		case IIO_EV_DIR_FALLING:
+			return regmap_write(st->map,
+					    AD7091R_REG_CH_LOW_LIMIT(chan->channel),
+					    val);
+		default:
+			return -EINVAL;
+		}
+	case IIO_EV_INFO_HYSTERESIS:
+		return regmap_write(st->map,
+				    AD7091R_REG_CH_HYSTERESIS(chan->channel),
+				    val);
+	default:
+		return -EINVAL;
+	}
+}
+
 static const struct iio_info ad7091r_info = {
 	.read_raw = ad7091r_read_raw,
+	.read_event_config = &ad7091r_read_event_config,
+	.write_event_config = &ad7091r_write_event_config,
+	.read_event_value = &ad7091r_read_event_value,
+	.write_event_value = &ad7091r_write_event_value,
 };
 
 static irqreturn_t ad7091r_event_handler(int irq, void *private)
@@ -232,6 +389,11 @@ int ad7091r_probe(struct device *dev, const char *name,
 	iio_dev->channels = chip_info->channels;
 
 	if (irq) {
+		ret = regmap_update_bits(st->map, AD7091R_REG_CONF,
+					 AD7091R_REG_CONF_ALERT_EN, BIT(4));
+		if (ret)
+			return ret;
+
 		ret = devm_request_threaded_irq(dev, irq, NULL,
 				ad7091r_event_handler,
 				IRQF_TRIGGER_FALLING | IRQF_ONESHOT, name, iio_dev);
@@ -243,7 +405,14 @@ int ad7091r_probe(struct device *dev, const char *name,
 	if (IS_ERR(st->vref)) {
 		if (PTR_ERR(st->vref) == -EPROBE_DEFER)
 			return -EPROBE_DEFER;
 
 		st->vref = NULL;
+		/* Enable internal vref */
+		ret = regmap_set_bits(st->map, AD7091R_REG_CONF,
+				      AD7091R_REG_CONF_INT_VREF);
+		if (ret)
+			return dev_err_probe(st->dev, ret,
+					     "Error on enable internal reference\n");
 	} else {
 		ret = regulator_enable(st->vref);
 		if (ret)
@@ -8,6 +8,12 @@
 #ifndef __DRIVERS_IIO_ADC_AD7091R_BASE_H__
 #define __DRIVERS_IIO_ADC_AD7091R_BASE_H__
 
+#define AD7091R_REG_CONF_INT_VREF	BIT(0)
+
+/* AD7091R_REG_CH_LIMIT */
+#define AD7091R_HIGH_LIMIT		0xFFF
+#define AD7091R_LOW_LIMIT		0x0
+
 struct device;
 struct ad7091r_state;
 
@@ -17,6 +23,8 @@ struct ad7091r_chip_info {
 	unsigned int vref_mV;
 };
 
+extern const struct iio_event_spec ad7091r_events[3];
+
 extern const struct regmap_config ad7091r_regmap_config;
 
 int ad7091r_probe(struct device *dev, const char *name,
@@ -12,26 +12,6 @@
 
 #include "ad7091r-base.h"
 
-static const struct iio_event_spec ad7091r5_events[] = {
-	{
-		.type = IIO_EV_TYPE_THRESH,
-		.dir = IIO_EV_DIR_RISING,
-		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
-				 BIT(IIO_EV_INFO_ENABLE),
-	},
-	{
-		.type = IIO_EV_TYPE_THRESH,
-		.dir = IIO_EV_DIR_FALLING,
-		.mask_separate = BIT(IIO_EV_INFO_VALUE) |
-				 BIT(IIO_EV_INFO_ENABLE),
-	},
-	{
-		.type = IIO_EV_TYPE_THRESH,
-		.dir = IIO_EV_DIR_EITHER,
-		.mask_separate = BIT(IIO_EV_INFO_HYSTERESIS),
-	},
-};
-
 #define AD7091R_CHANNEL(idx, bits, ev, num_ev) { \
 	.type = IIO_VOLTAGE, \
 	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
@@ -44,10 +24,10 @@ static const struct iio_event_spec ad7091r5_events[] = {
 	.scan_type.realbits = bits, \
 }
 static const struct iio_chan_spec ad7091r5_channels_irq[] = {
-	AD7091R_CHANNEL(0, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
-	AD7091R_CHANNEL(1, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
-	AD7091R_CHANNEL(2, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
-	AD7091R_CHANNEL(3, 12, ad7091r5_events, ARRAY_SIZE(ad7091r5_events)),
+	AD7091R_CHANNEL(0, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
+	AD7091R_CHANNEL(1, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
+	AD7091R_CHANNEL(2, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
+	AD7091R_CHANNEL(3, 12, ad7091r_events, ARRAY_SIZE(ad7091r_events)),
 };
 
 static const struct iio_chan_spec ad7091r5_channels_noirq[] = {
@@ -494,9 +494,15 @@ vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
 static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf,
 				      struct iosys_map *map)
 {
-	struct vb2_dma_sg_buf *buf = dbuf->priv;
+	struct vb2_dma_sg_buf *buf;
+	void *vaddr;
 
-	iosys_map_set_vaddr(map, buf->vaddr);
+	buf = dbuf->priv;
+	vaddr = vb2_dma_sg_vaddr(buf->vb, buf);
+	if (!vaddr)
+		return -EINVAL;
+
+	iosys_map_set_vaddr(map, vaddr);
 
 	return 0;
 }
|
@ -1784,10 +1784,6 @@ static int imx355_probe(struct i2c_client *client)
|
|||||||
goto error_handler_free;
|
goto error_handler_free;
|
||||||
}
|
}
|
||||||
|
|
||||||
ret = v4l2_async_register_subdev_sensor(&imx355->sd);
|
|
||||||
if (ret < 0)
|
|
||||||
goto error_media_entity;
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Device is already turned on by i2c-core with ACPI domain PM.
|
* Device is already turned on by i2c-core with ACPI domain PM.
|
||||||
* Enable runtime PM and turn off the device.
|
* Enable runtime PM and turn off the device.
|
||||||
@ -1796,9 +1792,15 @@ static int imx355_probe(struct i2c_client *client)
|
|||||||
pm_runtime_enable(&client->dev);
|
pm_runtime_enable(&client->dev);
|
||||||
pm_runtime_idle(&client->dev);
|
pm_runtime_idle(&client->dev);
|
||||||
|
|
||||||
|
ret = v4l2_async_register_subdev_sensor(&imx355->sd);
|
||||||
|
if (ret < 0)
|
||||||
|
goto error_media_entity_runtime_pm;
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
error_media_entity:
|
error_media_entity_runtime_pm:
|
||||||
|
pm_runtime_disable(&client->dev);
|
||||||
|
pm_runtime_set_suspended(&client->dev);
|
||||||
media_entity_cleanup(&imx355->sd.entity);
|
media_entity_cleanup(&imx355->sd.entity);
|
||||||
|
|
||||||
error_handler_free:
|
error_handler_free:
|
||||||
|
@@ -589,6 +589,9 @@ struct ov13b10 {
 
 	/* Streaming on/off */
 	bool streaming;
+
+	/* True if the device has been identified */
+	bool identified;
 };
 
 #define to_ov13b10(_sd)	container_of(_sd, struct ov13b10, sd)
@@ -1023,12 +1026,42 @@ ov13b10_set_pad_format(struct v4l2_subdev *sd,
 	return 0;
 }
 
+/* Verify chip ID */
+static int ov13b10_identify_module(struct ov13b10 *ov13b)
+{
+	struct i2c_client *client = v4l2_get_subdevdata(&ov13b->sd);
+	int ret;
+	u32 val;
+
+	if (ov13b->identified)
+		return 0;
+
+	ret = ov13b10_read_reg(ov13b, OV13B10_REG_CHIP_ID,
+			       OV13B10_REG_VALUE_24BIT, &val);
+	if (ret)
+		return ret;
+
+	if (val != OV13B10_CHIP_ID) {
+		dev_err(&client->dev, "chip id mismatch: %x!=%x\n",
+			OV13B10_CHIP_ID, val);
+		return -EIO;
+	}
+
+	ov13b->identified = true;
+
+	return 0;
+}
+
 static int ov13b10_start_streaming(struct ov13b10 *ov13b)
 {
 	struct i2c_client *client = v4l2_get_subdevdata(&ov13b->sd);
 	const struct ov13b10_reg_list *reg_list;
 	int ret, link_freq_index;
 
+	ret = ov13b10_identify_module(ov13b);
+	if (ret)
+		return ret;
+
 	/* Get out of from software reset */
 	ret = ov13b10_write_reg(ov13b, OV13B10_REG_SOFTWARE_RST,
 				OV13B10_REG_VALUE_08BIT, OV13B10_SOFTWARE_RST);
@@ -1144,27 +1177,6 @@ static int __maybe_unused ov13b10_resume(struct device *dev)
 	return ret;
 }
 
-/* Verify chip ID */
-static int ov13b10_identify_module(struct ov13b10 *ov13b)
-{
-	struct i2c_client *client = v4l2_get_subdevdata(&ov13b->sd);
-	int ret;
-	u32 val;
-
-	ret = ov13b10_read_reg(ov13b, OV13B10_REG_CHIP_ID,
-			       OV13B10_REG_VALUE_24BIT, &val);
-	if (ret)
-		return ret;
-
-	if (val != OV13B10_CHIP_ID) {
-		dev_err(&client->dev, "chip id mismatch: %x!=%x\n",
-			OV13B10_CHIP_ID, val);
-		return -EIO;
-	}
-
-	return 0;
-}
-
 static const struct v4l2_subdev_video_ops ov13b10_video_ops = {
 	.s_stream = ov13b10_set_stream,
 };
@@ -1379,6 +1391,7 @@ static int ov13b10_check_hwcfg(struct device *dev)
 static int ov13b10_probe(struct i2c_client *client)
 {
 	struct ov13b10 *ov13b;
+	bool full_power;
 	int ret;
 
 	/* Check HW config */
@@ -1395,11 +1408,14 @@ static int ov13b10_probe(struct i2c_client *client)
 	/* Initialize subdev */
 	v4l2_i2c_subdev_init(&ov13b->sd, client, &ov13b10_subdev_ops);
 
-	/* Check module identity */
-	ret = ov13b10_identify_module(ov13b);
-	if (ret) {
-		dev_err(&client->dev, "failed to find sensor: %d\n", ret);
-		return ret;
+	full_power = acpi_dev_state_d0(&client->dev);
+	if (full_power) {
+		/* Check module identity */
+		ret = ov13b10_identify_module(ov13b);
+		if (ret) {
+			dev_err(&client->dev, "failed to find sensor: %d\n", ret);
+			return ret;
+		}
 	}
 
 	/* Set default mode to max resolution */
@@ -1423,21 +1439,27 @@ static int ov13b10_probe(struct i2c_client *client)
 		goto error_handler_free;
 	}
 
-	ret = v4l2_async_register_subdev_sensor(&ov13b->sd);
-	if (ret < 0)
-		goto error_media_entity;
-
 	/*
 	 * Device is already turned on by i2c-core with ACPI domain PM.
 	 * Enable runtime PM and turn off the device.
 	 */
-	pm_runtime_set_active(&client->dev);
+	/* Set the device's state to active if it's in D0 state. */
+	if (full_power)
+		pm_runtime_set_active(&client->dev);
 	pm_runtime_enable(&client->dev);
 	pm_runtime_idle(&client->dev);
 
+	ret = v4l2_async_register_subdev_sensor(&ov13b->sd);
+	if (ret < 0)
+		goto error_media_entity_runtime_pm;
+
 	return 0;
 
-error_media_entity:
+error_media_entity_runtime_pm:
+	pm_runtime_disable(&client->dev);
+	if (full_power)
+		pm_runtime_set_suspended(&client->dev);
 	media_entity_cleanup(&ov13b->sd.entity);
 
 error_handler_free:
@@ -1457,6 +1479,7 @@ static void ov13b10_remove(struct i2c_client *client)
 	ov13b10_free_controls(ov13b);
 
 	pm_runtime_disable(&client->dev);
+	pm_runtime_set_suspended(&client->dev);
 }
 
 static const struct dev_pm_ops ov13b10_pm_ops = {
@@ -1480,6 +1503,7 @@ static struct i2c_driver ov13b10_i2c_driver = {
 	},
 	.probe_new = ov13b10_probe,
 	.remove = ov13b10_remove,
+	.flags = I2C_DRV_ACPI_WAIVE_D0_PROBE,
 };
 
 module_i2c_driver(ov13b10_i2c_driver);
@@ -939,6 +939,7 @@ static void ov9734_remove(struct i2c_client *client)
 	media_entity_cleanup(&sd->entity);
 	v4l2_ctrl_handler_free(sd->ctrl_handler);
 	pm_runtime_disable(&client->dev);
+	pm_runtime_set_suspended(&client->dev);
 	mutex_destroy(&ov9734->mutex);
 }
 
@@ -984,13 +985,6 @@ static int ov9734_probe(struct i2c_client *client)
 		goto probe_error_v4l2_ctrl_handler_free;
 	}
 
-	ret = v4l2_async_register_subdev_sensor(&ov9734->sd);
-	if (ret < 0) {
-		dev_err(&client->dev, "failed to register V4L2 subdev: %d",
-			ret);
-		goto probe_error_media_entity_cleanup;
-	}
-
 	/*
 	 * Device is already turned on by i2c-core with ACPI domain PM.
 	 * Enable runtime PM and turn off the device.
@@ -999,9 +993,18 @@ static int ov9734_probe(struct i2c_client *client)
 	pm_runtime_enable(&client->dev);
 	pm_runtime_idle(&client->dev);
 
+	ret = v4l2_async_register_subdev_sensor(&ov9734->sd);
+	if (ret < 0) {
+		dev_err(&client->dev, "failed to register V4L2 subdev: %d",
+			ret);
+		goto probe_error_media_entity_cleanup_pm;
+	}
+
 	return 0;
 
-probe_error_media_entity_cleanup:
+probe_error_media_entity_cleanup_pm:
+	pm_runtime_disable(&client->dev);
+	pm_runtime_set_suspended(&client->dev);
 	media_entity_cleanup(&ov9734->sd.entity);
 
 probe_error_v4l2_ctrl_handler_free:
@@ -974,13 +974,13 @@ static void mtk_jpeg_dec_device_run(void *priv)
 	if (ret < 0)
 		goto dec_end;
 
-	schedule_delayed_work(&jpeg->job_timeout_work,
-			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
-
 	mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs);
 	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, &dst_buf->vb2_buf, &fb))
 		goto dec_end;
 
+	schedule_delayed_work(&jpeg->job_timeout_work,
+			      msecs_to_jiffies(MTK_JPEG_HW_TIMEOUT_MSEC));
+
 	spin_lock_irqsave(&jpeg->hw_lock, flags);
 	mtk_jpeg_dec_reset(jpeg->reg_base);
 	mtk_jpeg_dec_set_config(jpeg->reg_base,
@@ -405,6 +405,10 @@ struct mmc_blk_ioc_data {
 	struct mmc_ioc_cmd ic;
 	unsigned char *buf;
 	u64 buf_bytes;
+	unsigned int flags;
+#define MMC_BLK_IOC_DROP	BIT(0)	/* drop this mrq */
+#define MMC_BLK_IOC_SBC	BIT(1)	/* use mrq.sbc */
+
 	struct mmc_rpmb_data *rpmb;
 };
 
@@ -470,7 +474,7 @@ static int mmc_blk_ioctl_copy_to_user(struct mmc_ioc_cmd __user *ic_ptr,
 }
 
 static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
-			       struct mmc_blk_ioc_data *idata)
+			       struct mmc_blk_ioc_data **idatas, int i)
 {
 	struct mmc_command cmd = {}, sbc = {};
 	struct mmc_data data = {};
@@ -480,10 +484,18 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
 	unsigned int busy_timeout_ms;
 	int err;
 	unsigned int target_part;
+	struct mmc_blk_ioc_data *idata = idatas[i];
+	struct mmc_blk_ioc_data *prev_idata = NULL;
 
 	if (!card || !md || !idata)
 		return -EINVAL;
 
+	if (idata->flags & MMC_BLK_IOC_DROP)
+		return 0;
+
+	if (idata->flags & MMC_BLK_IOC_SBC)
+		prev_idata = idatas[i - 1];
+
 	/*
 	 * The RPMB accesses comes in from the character device, so we
 	 * need to target these explicitly. Else we just target the
@@ -550,7 +562,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
 		return err;
 	}
 
-	if (idata->rpmb) {
+	if (idata->rpmb || prev_idata) {
 		sbc.opcode = MMC_SET_BLOCK_COUNT;
 		/*
 		 * We don't do any blockcount validation because the max size
@@ -558,6 +570,8 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
 		 * 'Reliable Write' bit here.
 		 */
 		sbc.arg = data.blocks | (idata->ic.write_flag & BIT(31));
+		if (prev_idata)
+			sbc.arg = prev_idata->ic.arg;
 		sbc.flags = MMC_RSP_R1 | MMC_CMD_AC;
 		mrq.sbc = &sbc;
 	}
@@ -575,6 +589,15 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
 	mmc_wait_for_req(card->host, &mrq);
 	memcpy(&idata->ic.response, cmd.resp, sizeof(cmd.resp));
 
+	if (prev_idata) {
+		memcpy(&prev_idata->ic.response, sbc.resp, sizeof(sbc.resp));
+		if (sbc.error) {
+			dev_err(mmc_dev(card->host), "%s: sbc error %d\n",
+				__func__, sbc.error);
+			return sbc.error;
+		}
+	}
+
 	if (cmd.error) {
 		dev_err(mmc_dev(card->host), "%s: cmd error %d\n",
 			__func__, cmd.error);
@@ -1060,6 +1083,20 @@ static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)
 	md->reset_done &= ~type;
 }
 
+static void mmc_blk_check_sbc(struct mmc_queue_req *mq_rq)
+{
+	struct mmc_blk_ioc_data **idata = mq_rq->drv_op_data;
+	int i;
+
+	for (i = 1; i < mq_rq->ioc_count; i++) {
+		if (idata[i - 1]->ic.opcode == MMC_SET_BLOCK_COUNT &&
+		    mmc_op_multi(idata[i]->ic.opcode)) {
+			idata[i - 1]->flags |= MMC_BLK_IOC_DROP;
+			idata[i]->flags |= MMC_BLK_IOC_SBC;
+		}
+	}
+}
+
 /*
  * The non-block commands come back from the block layer after it queued it and
  * processed it with all other requests and then they get issued in this
@@ -1087,11 +1124,14 @@ static void mmc_blk_issue_drv_op(struct mmc_queue *mq, struct request *req)
 			if (ret)
 				break;
 		}
+
+		mmc_blk_check_sbc(mq_rq);
+
 		fallthrough;
 	case MMC_DRV_OP_IOCTL_RPMB:
 		idata = mq_rq->drv_op_data;
 		for (i = 0, ret = 0; i < mq_rq->ioc_count; i++) {
-			ret = __mmc_blk_ioctl_cmd(card, md, idata[i]);
+			ret = __mmc_blk_ioctl_cmd(card, md, idata, i);
 			if (ret)
 				break;
 		}
@@ -15,7 +15,7 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/bio.h>
-#include <linux/dma-mapping.h>
+#include <linux/dma-direction.h>
 #include <linux/crc7.h>
 #include <linux/crc-itu-t.h>
 #include <linux/scatterlist.h>
@@ -119,19 +119,14 @@ struct mmc_spi_host {
 	struct spi_transfer	status;
 	struct spi_message	readback;
 
-	/* underlying DMA-aware controller, or null */
-	struct device		*dma_dev;
-
 	/* buffer used for commands and for message "overhead" */
 	struct scratch		*data;
-	dma_addr_t		data_dma;
 
 	/* Specs say to write ones most of the time, even when the card
	 * has no need to read its input data; and many cards won't care.
	 * This is our source of those ones.
	 */
 	void			*ones;
-	dma_addr_t		ones_dma;
 };
 
 
@@ -147,11 +142,8 @@ static inline int mmc_cs_off(struct mmc_spi_host *host)
 	return spi_setup(host->spi);
 }
 
-static int
-mmc_spi_readbytes(struct mmc_spi_host *host, unsigned len)
+static int mmc_spi_readbytes(struct mmc_spi_host *host, unsigned int len)
 {
-	int status;
-
 	if (len > sizeof(*host->data)) {
 		WARN_ON(1);
 		return -EIO;
@@ -159,19 +151,7 @@ mmc_spi_readbytes(struct mmc_spi_host *host, unsigned len)
 
 	host->status.len = len;
 
-	if (host->dma_dev)
-		dma_sync_single_for_device(host->dma_dev,
-				host->data_dma, sizeof(*host->data),
-				DMA_FROM_DEVICE);
-
-	status = spi_sync_locked(host->spi, &host->readback);
-
-	if (host->dma_dev)
-		dma_sync_single_for_cpu(host->dma_dev,
-				host->data_dma, sizeof(*host->data),
-				DMA_FROM_DEVICE);
-
-	return status;
+	return spi_sync_locked(host->spi, &host->readback);
 }
 
 static int mmc_spi_skip(struct mmc_spi_host *host, unsigned long timeout,
@@ -506,23 +486,11 @@ mmc_spi_command_send(struct mmc_spi_host *host,
 	t = &host->t;
 	memset(t, 0, sizeof(*t));
 	t->tx_buf = t->rx_buf = data->status;
-	t->tx_dma = t->rx_dma = host->data_dma;
 	t->len = cp - data->status;
 	t->cs_change = 1;
 	spi_message_add_tail(t, &host->m);
 
-	if (host->dma_dev) {
-		host->m.is_dma_mapped = 1;
-		dma_sync_single_for_device(host->dma_dev,
-				host->data_dma, sizeof(*host->data),
-				DMA_BIDIRECTIONAL);
-	}
 	status = spi_sync_locked(host->spi, &host->m);
-
-	if (host->dma_dev)
-		dma_sync_single_for_cpu(host->dma_dev,
-				host->data_dma, sizeof(*host->data),
-				DMA_BIDIRECTIONAL);
 	if (status < 0) {
 		dev_dbg(&host->spi->dev, " ... write returned %d\n", status);
 		cmd->error = status;
@@ -540,9 +508,6 @@ mmc_spi_command_send(struct mmc_spi_host *host,
 * We always provide TX data for data and CRC. The MMC/SD protocol
 * requires us to write ones; but Linux defaults to writing zeroes;
 * so we explicitly initialize it to all ones on RX paths.
- *
- * We also handle DMA mapping, so the underlying SPI controller does
- * not need to (re)do it for each message.
 */
 static void
 mmc_spi_setup_data_message(
@@ -552,11 +517,8 @@ mmc_spi_setup_data_message(
 {
 	struct spi_transfer	*t;
 	struct scratch		*scratch = host->data;
-	dma_addr_t		dma = host->data_dma;
 
 	spi_message_init(&host->m);
-	if (dma)
-		host->m.is_dma_mapped = 1;
 
 	/* for reads, readblock() skips 0xff bytes before finding
	 * the token; for writes, this transfer issues that token.
@@ -570,8 +532,6 @@ mmc_spi_setup_data_message(
 		else
 			scratch->data_token = SPI_TOKEN_SINGLE;
 		t->tx_buf = &scratch->data_token;
-		if (dma)
-			t->tx_dma = dma + offsetof(struct scratch, data_token);
 		spi_message_add_tail(t, &host->m);
 	}
 
@@ -581,7 +541,6 @@ mmc_spi_setup_data_message(
 	t = &host->t;
 	memset(t, 0, sizeof(*t));
 	t->tx_buf = host->ones;
-	t->tx_dma = host->ones_dma;
 	/* length and actual buffer info are written later */
 	spi_message_add_tail(t, &host->m);
 
@@ -591,14 +550,9 @@ mmc_spi_setup_data_message(
 	if (direction == DMA_TO_DEVICE) {
 		/* the actual CRC may get written later */
 		t->tx_buf = &scratch->crc_val;
-		if (dma)
-			t->tx_dma = dma + offsetof(struct scratch, crc_val);
 	} else {
 		t->tx_buf = host->ones;
-		t->tx_dma = host->ones_dma;
 		t->rx_buf = &scratch->crc_val;
-		if (dma)
-			t->rx_dma = dma + offsetof(struct scratch, crc_val);
 	}
 	spi_message_add_tail(t, &host->m);
 
@@ -621,10 +575,7 @@ mmc_spi_setup_data_message(
 	memset(t, 0, sizeof(*t));
 	t->len = (direction == DMA_TO_DEVICE) ? sizeof(scratch->status) : 1;
 	t->tx_buf = host->ones;
-	t->tx_dma = host->ones_dma;
 	t->rx_buf = scratch->status;
-	if (dma)
-		t->rx_dma = dma + offsetof(struct scratch, status);
 	t->cs_change = 1;
 	spi_message_add_tail(t, &host->m);
 }
@@ -653,23 +604,13 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t,
 
 	if (host->mmc->use_spi_crc)
 		scratch->crc_val = cpu_to_be16(crc_itu_t(0, t->tx_buf, t->len));
-	if (host->dma_dev)
-		dma_sync_single_for_device(host->dma_dev,
-				host->data_dma, sizeof(*scratch),
-				DMA_BIDIRECTIONAL);
 
 	status = spi_sync_locked(spi, &host->m);
-
 	if (status != 0) {
 		dev_dbg(&spi->dev, "write error (%d)\n", status);
 		return status;
 	}
 
-	if (host->dma_dev)
-		dma_sync_single_for_cpu(host->dma_dev,
-				host->data_dma, sizeof(*scratch),
-				DMA_BIDIRECTIONAL);
-
 	/*
	 * Get the transmission data-response reply. It must follow
	 * immediately after the data block we transferred. This reply
@@ -718,8 +659,6 @@ mmc_spi_writeblock(struct mmc_spi_host *host, struct spi_transfer *t,
 	}
 
 	t->tx_buf += t->len;
-	if (host->dma_dev)
-		t->tx_dma += t->len;
 
 	/* Return when not busy. If we didn't collect that status yet,
	 * we'll need some more I/O.
@@ -783,30 +722,12 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
 	}
 	leftover = status << 1;
 
-	if (host->dma_dev) {
-		dma_sync_single_for_device(host->dma_dev,
-				host->data_dma, sizeof(*scratch),
-				DMA_BIDIRECTIONAL);
-		dma_sync_single_for_device(host->dma_dev,
-				t->rx_dma, t->len,
-				DMA_FROM_DEVICE);
-	}
-
 	status = spi_sync_locked(spi, &host->m);
 	if (status < 0) {
 		dev_dbg(&spi->dev, "read error %d\n", status);
 		return status;
 	}
 
-	if (host->dma_dev) {
-		dma_sync_single_for_cpu(host->dma_dev,
-				host->data_dma, sizeof(*scratch),
-				DMA_BIDIRECTIONAL);
-		dma_sync_single_for_cpu(host->dma_dev,
-				t->rx_dma, t->len,
-				DMA_FROM_DEVICE);
-	}
-
 	if (bitshift) {
 		/* Walk through the data and the crc and do
		 * all the magic to get byte-aligned data.
@@ -841,8 +762,6 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
 	}
 
 	t->rx_buf += t->len;
-	if (host->dma_dev)
-		t->rx_dma += t->len;
 
 	return 0;
 }
@@ -857,7 +776,6 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
 		struct mmc_data *data, u32 blk_size)
 {
 	struct spi_device	*spi = host->spi;
-	struct device		*dma_dev = host->dma_dev;
 	struct spi_transfer	*t;
 	enum dma_data_direction	direction = mmc_get_dma_dir(data);
 	struct scatterlist	*sg;
@@ -884,31 +802,8 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
	 */
 	for_each_sg(data->sg, sg, data->sg_len, n_sg) {
 		int status = 0;
-		dma_addr_t dma_addr = 0;
 		void *kmap_addr;
 		unsigned length = sg->length;
-		enum dma_data_direction dir = direction;
-
-		/* set up dma mapping for controller drivers that might
-		 * use DMA ... though they may fall back to PIO
-		 */
-		if (dma_dev) {
-			/* never invalidate whole *shared* pages ... */
-			if ((sg->offset != 0 || length != PAGE_SIZE)
-					&& dir == DMA_FROM_DEVICE)
-				dir = DMA_BIDIRECTIONAL;
-
-			dma_addr = dma_map_page(dma_dev, sg_page(sg), 0,
-						PAGE_SIZE, dir);
-			if (dma_mapping_error(dma_dev, dma_addr)) {
-				data->error = -EFAULT;
-				break;
-			}
-			if (direction == DMA_TO_DEVICE)
-				t->tx_dma = dma_addr + sg->offset;
-			else
-				t->rx_dma = dma_addr + sg->offset;
-		}
 
 		/* allow pio too; we don't allow highmem */
 		kmap_addr = kmap(sg_page(sg));
@@ -941,8 +836,6 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
 		if (direction == DMA_FROM_DEVICE)
 			flush_dcache_page(sg_page(sg));
 		kunmap(sg_page(sg));
-		if (dma_dev)
-			dma_unmap_page(dma_dev, dma_addr, PAGE_SIZE, dir);
 
 		if (status < 0) {
 			data->error = status;
@@ -977,21 +870,9 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
 		scratch->status[0] = SPI_TOKEN_STOP_TRAN;
 
 		host->early_status.tx_buf = host->early_status.rx_buf;
-		host->early_status.tx_dma = host->early_status.rx_dma;
 		host->early_status.len = statlen;
 
-		if (host->dma_dev)
-			dma_sync_single_for_device(host->dma_dev,
-					host->data_dma, sizeof(*scratch),
-					DMA_BIDIRECTIONAL);
-
 		tmp = spi_sync_locked(spi, &host->m);
-
-		if (host->dma_dev)
-			dma_sync_single_for_cpu(host->dma_dev,
-					host->data_dma, sizeof(*scratch),
-					DMA_BIDIRECTIONAL);
-
 		if (tmp < 0) {
 			if (!data->error)
 				data->error = tmp;
@@ -1265,52 +1146,6 @@ mmc_spi_detect_irq(int irq, void *mmc)
 	return IRQ_HANDLED;
 }
 
-#ifdef CONFIG_HAS_DMA
-static int mmc_spi_dma_alloc(struct mmc_spi_host *host)
-{
-	struct spi_device *spi = host->spi;
-	struct device *dev;
-
-	if (!spi->master->dev.parent->dma_mask)
-		return 0;
-
-	dev = spi->master->dev.parent;
-
-	host->ones_dma = dma_map_single(dev, host->ones, MMC_SPI_BLOCKSIZE,
-					DMA_TO_DEVICE);
-	if (dma_mapping_error(dev, host->ones_dma))
-		return -ENOMEM;
-
-	host->data_dma = dma_map_single(dev, host->data, sizeof(*host->data),
-					DMA_BIDIRECTIONAL);
-	if (dma_mapping_error(dev, host->data_dma)) {
-		dma_unmap_single(dev, host->ones_dma, MMC_SPI_BLOCKSIZE,
-				 DMA_TO_DEVICE);
-		return -ENOMEM;
-	}
-
-	dma_sync_single_for_cpu(dev, host->data_dma, sizeof(*host->data),
-				DMA_BIDIRECTIONAL);
-
-	host->dma_dev = dev;
-	return 0;
-}
-
-static void mmc_spi_dma_free(struct mmc_spi_host *host)
-{
-	if (!host->dma_dev)
-		return;
-
-	dma_unmap_single(host->dma_dev, host->ones_dma, MMC_SPI_BLOCKSIZE,
-			 DMA_TO_DEVICE);
-	dma_unmap_single(host->dma_dev, host->data_dma, sizeof(*host->data),
-			 DMA_BIDIRECTIONAL);
-}
-#else
-static inline int mmc_spi_dma_alloc(struct mmc_spi_host *host) { return 0; }
-static inline void mmc_spi_dma_free(struct mmc_spi_host *host) {}
-#endif
-
 static int mmc_spi_probe(struct spi_device *spi)
 {
 	void *ones;
@@ -1402,24 +1237,17 @@ static int mmc_spi_probe(struct spi_device *spi)
 		host->powerup_msecs = 250;
 	}
 
-	/* preallocate dma buffers */
+	/* Preallocate buffers */
 	host->data = kmalloc(sizeof(*host->data), GFP_KERNEL);
 	if (!host->data)
 		goto fail_nobuf1;
 
-	status = mmc_spi_dma_alloc(host);
-	if (status)
-		goto fail_dma;
-
 	/* setup message for status/busy readback */
 	spi_message_init(&host->readback);
-	host->readback.is_dma_mapped = (host->dma_dev != NULL);
 	spi_message_add_tail(&host->status, &host->readback);
 	host->status.tx_buf = host->ones;
-	host->status.tx_dma = host->ones_dma;
 	host->status.rx_buf = &host->data->status;
-	host->status.rx_dma = host->data_dma + offsetof(struct scratch, status);
 	host->status.cs_change = 1;
 
 	/* register card detect irq */
@@ -1464,9 +1292,8 @@ static int mmc_spi_probe(struct spi_device *spi)
 	if (!status)
 		has_ro = true;
 
-	dev_info(&spi->dev, "SD/MMC host %s%s%s%s%s\n",
+	dev_info(&spi->dev, "SD/MMC host %s%s%s%s\n",
 		dev_name(&mmc->class_dev),
-		host->dma_dev ? "" : ", no DMA",
 		has_ro ? "" : ", no WP",
|
||||||
(host->pdata && host->pdata->setpower)
|
(host->pdata && host->pdata->setpower)
|
||||||
? "" : ", no poweroff",
|
? "" : ", no poweroff",
|
||||||
@ -1477,8 +1304,6 @@ static int mmc_spi_probe(struct spi_device *spi)
|
|||||||
fail_gpiod_request:
|
fail_gpiod_request:
|
||||||
mmc_remove_host(mmc);
|
mmc_remove_host(mmc);
|
||||||
fail_glue_init:
|
fail_glue_init:
|
||||||
mmc_spi_dma_free(host);
|
|
||||||
fail_dma:
|
|
||||||
kfree(host->data);
|
kfree(host->data);
|
||||||
fail_nobuf1:
|
fail_nobuf1:
|
||||||
mmc_spi_put_pdata(spi);
|
mmc_spi_put_pdata(spi);
|
||||||
@ -1500,7 +1325,6 @@ static void mmc_spi_remove(struct spi_device *spi)
|
|||||||
|
|
||||||
mmc_remove_host(mmc);
|
mmc_remove_host(mmc);
|
||||||
|
|
||||||
mmc_spi_dma_free(host);
|
|
||||||
kfree(host->data);
|
kfree(host->data);
|
||||||
kfree(host->ones);
|
kfree(host->ones);
|
||||||
|
|
||||||
|
@ -12269,6 +12269,11 @@ static int bnxt_fw_init_one_p1(struct bnxt *bp)
|
|||||||
|
|
||||||
bp->fw_cap = 0;
|
bp->fw_cap = 0;
|
||||||
rc = bnxt_hwrm_ver_get(bp);
|
rc = bnxt_hwrm_ver_get(bp);
|
||||||
|
/* FW may be unresponsive after FLR. FLR must complete within 100 msec
|
||||||
|
* so wait before continuing with recovery.
|
||||||
|
*/
|
||||||
|
if (rc)
|
||||||
|
msleep(100);
|
||||||
bnxt_try_map_fw_health_reg(bp);
|
bnxt_try_map_fw_health_reg(bp);
|
||||||
if (rc) {
|
if (rc) {
|
||||||
rc = bnxt_try_recover_fw(bp);
|
rc = bnxt_try_recover_fw(bp);
|
||||||
|
@ -1917,6 +1917,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
|
|||||||
|
|
||||||
/* if any of the above changed restart the FEC */
|
/* if any of the above changed restart the FEC */
|
||||||
if (status_change) {
|
if (status_change) {
|
||||||
|
netif_stop_queue(ndev);
|
||||||
napi_disable(&fep->napi);
|
napi_disable(&fep->napi);
|
||||||
netif_tx_lock_bh(ndev);
|
netif_tx_lock_bh(ndev);
|
||||||
fec_restart(ndev);
|
fec_restart(ndev);
|
||||||
@ -1926,6 +1927,7 @@ static void fec_enet_adjust_link(struct net_device *ndev)
|
|||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
if (fep->link) {
|
if (fep->link) {
|
||||||
|
netif_stop_queue(ndev);
|
||||||
napi_disable(&fep->napi);
|
napi_disable(&fep->napi);
|
||||||
netif_tx_lock_bh(ndev);
|
netif_tx_lock_bh(ndev);
|
||||||
fec_stop(ndev);
|
fec_stop(ndev);
|
||||||
|
@ -614,12 +614,38 @@ static void mvpp23_bm_set_8pool_mode(struct mvpp2 *priv)
|
|||||||
mvpp2_write(priv, MVPP22_BM_POOL_BASE_ADDR_HIGH_REG, val);
|
mvpp2_write(priv, MVPP22_BM_POOL_BASE_ADDR_HIGH_REG, val);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/* Cleanup pool before actual initialization in the OS */
|
||||||
|
static void mvpp2_bm_pool_cleanup(struct mvpp2 *priv, int pool_id)
|
||||||
|
{
|
||||||
|
unsigned int thread = mvpp2_cpu_to_thread(priv, get_cpu());
|
||||||
|
u32 val;
|
||||||
|
int i;
|
||||||
|
|
||||||
|
/* Drain the BM from all possible residues left by firmware */
|
||||||
|
for (i = 0; i < MVPP2_BM_POOL_SIZE_MAX; i++)
|
||||||
|
mvpp2_thread_read(priv, thread, MVPP2_BM_PHY_ALLOC_REG(pool_id));
|
||||||
|
|
||||||
|
put_cpu();
|
||||||
|
|
||||||
|
/* Stop the BM pool */
|
||||||
|
val = mvpp2_read(priv, MVPP2_BM_POOL_CTRL_REG(pool_id));
|
||||||
|
val |= MVPP2_BM_STOP_MASK;
|
||||||
|
mvpp2_write(priv, MVPP2_BM_POOL_CTRL_REG(pool_id), val);
|
||||||
|
}
|
||||||
|
|
||||||
static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
|
static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
|
||||||
{
|
{
|
||||||
enum dma_data_direction dma_dir = DMA_FROM_DEVICE;
|
enum dma_data_direction dma_dir = DMA_FROM_DEVICE;
|
||||||
int i, err, poolnum = MVPP2_BM_POOLS_NUM;
|
int i, err, poolnum = MVPP2_BM_POOLS_NUM;
|
||||||
struct mvpp2_port *port;
|
struct mvpp2_port *port;
|
||||||
|
|
||||||
|
if (priv->percpu_pools)
|
||||||
|
poolnum = mvpp2_get_nrxqs(priv) * 2;
|
||||||
|
|
||||||
|
/* Clean up the pool state in case it contains stale state */
|
||||||
|
for (i = 0; i < poolnum; i++)
|
||||||
|
mvpp2_bm_pool_cleanup(priv, i);
|
||||||
|
|
||||||
if (priv->percpu_pools) {
|
if (priv->percpu_pools) {
|
||||||
for (i = 0; i < priv->port_count; i++) {
|
for (i = 0; i < priv->port_count; i++) {
|
||||||
port = priv->port_list[i];
|
port = priv->port_list[i];
|
||||||
@ -629,7 +655,6 @@ static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
poolnum = mvpp2_get_nrxqs(priv) * 2;
|
|
||||||
for (i = 0; i < poolnum; i++) {
|
for (i = 0; i < poolnum; i++) {
|
||||||
/* the pool in use */
|
/* the pool in use */
|
||||||
int pn = i / (poolnum / 2);
|
int pn = i / (poolnum / 2);
|
||||||
|
@ -436,6 +436,7 @@ static int fs_any_create_groups(struct mlx5e_flow_table *ft)
|
|||||||
in = kvzalloc(inlen, GFP_KERNEL);
|
in = kvzalloc(inlen, GFP_KERNEL);
|
||||||
if (!in || !ft->g) {
|
if (!in || !ft->g) {
|
||||||
kfree(ft->g);
|
kfree(ft->g);
|
||||||
|
ft->g = NULL;
|
||||||
kvfree(in);
|
kvfree(in);
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
}
|
}
|
||||||
|
@ -990,8 +990,8 @@ void mlx5e_build_sq_param(struct mlx5_core_dev *mdev,
|
|||||||
void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
|
void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
|
||||||
bool allow_swp;
|
bool allow_swp;
|
||||||
|
|
||||||
allow_swp =
|
allow_swp = mlx5_geneve_tx_allowed(mdev) ||
|
||||||
mlx5_geneve_tx_allowed(mdev) || !!mlx5_ipsec_device_caps(mdev);
|
(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_CRYPTO);
|
||||||
mlx5e_build_sq_param_common(mdev, param);
|
mlx5e_build_sq_param_common(mdev, param);
|
||||||
MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size);
|
MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size);
|
||||||
MLX5_SET(sqc, sqc, allow_swp, allow_swp);
|
MLX5_SET(sqc, sqc, allow_swp, allow_swp);
|
||||||
|
@ -34,7 +34,6 @@
|
|||||||
#ifndef __MLX5E_IPSEC_H__
|
#ifndef __MLX5E_IPSEC_H__
|
||||||
#define __MLX5E_IPSEC_H__
|
#define __MLX5E_IPSEC_H__
|
||||||
|
|
||||||
#ifdef CONFIG_MLX5_EN_IPSEC
|
|
||||||
|
|
||||||
#include <linux/mlx5/device.h>
|
#include <linux/mlx5/device.h>
|
||||||
#include <net/xfrm.h>
|
#include <net/xfrm.h>
|
||||||
@ -146,6 +145,7 @@ struct mlx5e_ipsec_sa_entry {
|
|||||||
struct mlx5e_ipsec_modify_state_work modify_work;
|
struct mlx5e_ipsec_modify_state_work modify_work;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
#ifdef CONFIG_MLX5_EN_IPSEC
|
||||||
int mlx5e_ipsec_init(struct mlx5e_priv *priv);
|
int mlx5e_ipsec_init(struct mlx5e_priv *priv);
|
||||||
void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv);
|
void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv);
|
||||||
void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv);
|
void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv);
|
||||||
|
@ -255,11 +255,13 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
|
|||||||
|
|
||||||
ft->g = kcalloc(MLX5E_ARFS_NUM_GROUPS,
|
ft->g = kcalloc(MLX5E_ARFS_NUM_GROUPS,
|
||||||
sizeof(*ft->g), GFP_KERNEL);
|
sizeof(*ft->g), GFP_KERNEL);
|
||||||
in = kvzalloc(inlen, GFP_KERNEL);
|
if (!ft->g)
|
||||||
if (!in || !ft->g) {
|
|
||||||
kfree(ft->g);
|
|
||||||
kvfree(in);
|
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
|
in = kvzalloc(inlen, GFP_KERNEL);
|
||||||
|
if (!in) {
|
||||||
|
err = -ENOMEM;
|
||||||
|
goto err_free_g;
|
||||||
}
|
}
|
||||||
|
|
||||||
mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
|
mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
|
||||||
@ -279,7 +281,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
|
|||||||
break;
|
break;
|
||||||
default:
|
default:
|
||||||
err = -EINVAL;
|
err = -EINVAL;
|
||||||
goto out;
|
goto err_free_in;
|
||||||
}
|
}
|
||||||
|
|
||||||
switch (type) {
|
switch (type) {
|
||||||
@ -301,7 +303,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
|
|||||||
break;
|
break;
|
||||||
default:
|
default:
|
||||||
err = -EINVAL;
|
err = -EINVAL;
|
||||||
goto out;
|
goto err_free_in;
|
||||||
}
|
}
|
||||||
|
|
||||||
MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
|
MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
|
||||||
@ -310,7 +312,7 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
|
|||||||
MLX5_SET_CFG(in, end_flow_index, ix - 1);
|
MLX5_SET_CFG(in, end_flow_index, ix - 1);
|
||||||
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
|
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
|
||||||
if (IS_ERR(ft->g[ft->num_groups]))
|
if (IS_ERR(ft->g[ft->num_groups]))
|
||||||
goto err;
|
goto err_clean_group;
|
||||||
ft->num_groups++;
|
ft->num_groups++;
|
||||||
|
|
||||||
memset(in, 0, inlen);
|
memset(in, 0, inlen);
|
||||||
@ -319,18 +321,20 @@ static int arfs_create_groups(struct mlx5e_flow_table *ft,
|
|||||||
MLX5_SET_CFG(in, end_flow_index, ix - 1);
|
MLX5_SET_CFG(in, end_flow_index, ix - 1);
|
||||||
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
|
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
|
||||||
if (IS_ERR(ft->g[ft->num_groups]))
|
if (IS_ERR(ft->g[ft->num_groups]))
|
||||||
goto err;
|
goto err_clean_group;
|
||||||
ft->num_groups++;
|
ft->num_groups++;
|
||||||
|
|
||||||
kvfree(in);
|
kvfree(in);
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
err:
|
err_clean_group:
|
||||||
err = PTR_ERR(ft->g[ft->num_groups]);
|
err = PTR_ERR(ft->g[ft->num_groups]);
|
||||||
ft->g[ft->num_groups] = NULL;
|
ft->g[ft->num_groups] = NULL;
|
||||||
out:
|
err_free_in:
|
||||||
kvfree(in);
|
kvfree(in);
|
||||||
|
err_free_g:
|
||||||
|
kfree(ft->g);
|
||||||
|
ft->g = NULL;
|
||||||
return err;
|
return err;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -98,7 +98,7 @@ static int create_aso_cq(struct mlx5_aso_cq *cq, void *cqc_data)
|
|||||||
mlx5_fill_page_frag_array(&cq->wq_ctrl.buf,
|
mlx5_fill_page_frag_array(&cq->wq_ctrl.buf,
|
||||||
(__be64 *)MLX5_ADDR_OF(create_cq_in, in, pas));
|
(__be64 *)MLX5_ADDR_OF(create_cq_in, in, pas));
|
||||||
|
|
||||||
MLX5_SET(cqc, cqc, cq_period_mode, DIM_CQ_PERIOD_MODE_START_FROM_EQE);
|
MLX5_SET(cqc, cqc, cq_period_mode, MLX5_CQ_PERIOD_MODE_START_FROM_EQE);
|
||||||
MLX5_SET(cqc, cqc, c_eqn_or_apu_element, eqn);
|
MLX5_SET(cqc, cqc, c_eqn_or_apu_element, eqn);
|
||||||
MLX5_SET(cqc, cqc, uar_page, mdev->priv.uar->index);
|
MLX5_SET(cqc, cqc, uar_page, mdev->priv.uar->index);
|
||||||
MLX5_SET(cqc, cqc, log_page_size, cq->wq_ctrl.buf.page_shift -
|
MLX5_SET(cqc, cqc, log_page_size, cq->wq_ctrl.buf.page_shift -
|
||||||
|
@ -673,6 +673,7 @@ int mlx5dr_actions_build_ste_arr(struct mlx5dr_matcher *matcher,
|
|||||||
switch (action_type) {
|
switch (action_type) {
|
||||||
case DR_ACTION_TYP_DROP:
|
case DR_ACTION_TYP_DROP:
|
||||||
attr.final_icm_addr = nic_dmn->drop_icm_addr;
|
attr.final_icm_addr = nic_dmn->drop_icm_addr;
|
||||||
|
attr.hit_gvmi = nic_dmn->drop_icm_addr >> 48;
|
||||||
break;
|
break;
|
||||||
case DR_ACTION_TYP_FT:
|
case DR_ACTION_TYP_FT:
|
||||||
dest_action = action;
|
dest_action = action;
|
||||||
@ -761,11 +762,17 @@ int mlx5dr_actions_build_ste_arr(struct mlx5dr_matcher *matcher,
|
|||||||
action->sampler->tx_icm_addr;
|
action->sampler->tx_icm_addr;
|
||||||
break;
|
break;
|
||||||
case DR_ACTION_TYP_VPORT:
|
case DR_ACTION_TYP_VPORT:
|
||||||
attr.hit_gvmi = action->vport->caps->vhca_gvmi;
|
if (unlikely(rx_rule && action->vport->caps->num == MLX5_VPORT_UPLINK)) {
|
||||||
dest_action = action;
|
/* can't go to uplink on RX rule - dropping instead */
|
||||||
attr.final_icm_addr = rx_rule ?
|
attr.final_icm_addr = nic_dmn->drop_icm_addr;
|
||||||
action->vport->caps->icm_address_rx :
|
attr.hit_gvmi = nic_dmn->drop_icm_addr >> 48;
|
||||||
action->vport->caps->icm_address_tx;
|
} else {
|
||||||
|
attr.hit_gvmi = action->vport->caps->vhca_gvmi;
|
||||||
|
dest_action = action;
|
||||||
|
attr.final_icm_addr = rx_rule ?
|
||||||
|
action->vport->caps->icm_address_rx :
|
||||||
|
action->vport->caps->icm_address_tx;
|
||||||
|
}
|
||||||
break;
|
break;
|
||||||
case DR_ACTION_TYP_POP_VLAN:
|
case DR_ACTION_TYP_POP_VLAN:
|
||||||
if (!rx_rule && !(dmn->ste_ctx->actions_caps &
|
if (!rx_rule && !(dmn->ste_ctx->actions_caps &
|
||||||
|
@ -7166,6 +7166,9 @@ int stmmac_dvr_probe(struct device *device,
|
|||||||
dev_err(priv->device, "unable to bring out of ahb reset: %pe\n",
|
dev_err(priv->device, "unable to bring out of ahb reset: %pe\n",
|
||||||
ERR_PTR(ret));
|
ERR_PTR(ret));
|
||||||
|
|
||||||
|
/* Wait a bit for the reset to take effect */
|
||||||
|
udelay(10);
|
||||||
|
|
||||||
/* Init MAC and get the capabilities */
|
/* Init MAC and get the capabilities */
|
||||||
ret = stmmac_hw_init(priv);
|
ret = stmmac_hw_init(priv);
|
||||||
if (ret)
|
if (ret)
|
||||||
|
@ -221,21 +221,25 @@ static int fjes_hw_setup(struct fjes_hw *hw)
|
|||||||
|
|
||||||
mem_size = FJES_DEV_REQ_BUF_SIZE(hw->max_epid);
|
mem_size = FJES_DEV_REQ_BUF_SIZE(hw->max_epid);
|
||||||
hw->hw_info.req_buf = kzalloc(mem_size, GFP_KERNEL);
|
hw->hw_info.req_buf = kzalloc(mem_size, GFP_KERNEL);
|
||||||
if (!(hw->hw_info.req_buf))
|
if (!(hw->hw_info.req_buf)) {
|
||||||
return -ENOMEM;
|
result = -ENOMEM;
|
||||||
|
goto free_ep_info;
|
||||||
|
}
|
||||||
|
|
||||||
hw->hw_info.req_buf_size = mem_size;
|
hw->hw_info.req_buf_size = mem_size;
|
||||||
|
|
||||||
mem_size = FJES_DEV_RES_BUF_SIZE(hw->max_epid);
|
mem_size = FJES_DEV_RES_BUF_SIZE(hw->max_epid);
|
||||||
hw->hw_info.res_buf = kzalloc(mem_size, GFP_KERNEL);
|
hw->hw_info.res_buf = kzalloc(mem_size, GFP_KERNEL);
|
||||||
if (!(hw->hw_info.res_buf))
|
if (!(hw->hw_info.res_buf)) {
|
||||||
return -ENOMEM;
|
result = -ENOMEM;
|
||||||
|
goto free_req_buf;
|
||||||
|
}
|
||||||
|
|
||||||
hw->hw_info.res_buf_size = mem_size;
|
hw->hw_info.res_buf_size = mem_size;
|
||||||
|
|
||||||
result = fjes_hw_alloc_shared_status_region(hw);
|
result = fjes_hw_alloc_shared_status_region(hw);
|
||||||
if (result)
|
if (result)
|
||||||
return result;
|
goto free_res_buf;
|
||||||
|
|
||||||
hw->hw_info.buffer_share_bit = 0;
|
hw->hw_info.buffer_share_bit = 0;
|
||||||
hw->hw_info.buffer_unshare_reserve_bit = 0;
|
hw->hw_info.buffer_unshare_reserve_bit = 0;
|
||||||
@ -246,11 +250,11 @@ static int fjes_hw_setup(struct fjes_hw *hw)
|
|||||||
|
|
||||||
result = fjes_hw_alloc_epbuf(&buf_pair->tx);
|
result = fjes_hw_alloc_epbuf(&buf_pair->tx);
|
||||||
if (result)
|
if (result)
|
||||||
return result;
|
goto free_epbuf;
|
||||||
|
|
||||||
result = fjes_hw_alloc_epbuf(&buf_pair->rx);
|
result = fjes_hw_alloc_epbuf(&buf_pair->rx);
|
||||||
if (result)
|
if (result)
|
||||||
return result;
|
goto free_epbuf;
|
||||||
|
|
||||||
spin_lock_irqsave(&hw->rx_status_lock, flags);
|
spin_lock_irqsave(&hw->rx_status_lock, flags);
|
||||||
fjes_hw_setup_epbuf(&buf_pair->tx, mac,
|
fjes_hw_setup_epbuf(&buf_pair->tx, mac,
|
||||||
@ -273,6 +277,25 @@ static int fjes_hw_setup(struct fjes_hw *hw)
|
|||||||
fjes_hw_init_command_registers(hw, ¶m);
|
fjes_hw_init_command_registers(hw, ¶m);
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
|
|
||||||
|
free_epbuf:
|
||||||
|
for (epidx = 0; epidx < hw->max_epid ; epidx++) {
|
||||||
|
if (epidx == hw->my_epid)
|
||||||
|
continue;
|
||||||
|
fjes_hw_free_epbuf(&hw->ep_shm_info[epidx].tx);
|
||||||
|
fjes_hw_free_epbuf(&hw->ep_shm_info[epidx].rx);
|
||||||
|
}
|
||||||
|
fjes_hw_free_shared_status_region(hw);
|
||||||
|
free_res_buf:
|
||||||
|
kfree(hw->hw_info.res_buf);
|
||||||
|
hw->hw_info.res_buf = NULL;
|
||||||
|
free_req_buf:
|
||||||
|
kfree(hw->hw_info.req_buf);
|
||||||
|
hw->hw_info.req_buf = NULL;
|
||||||
|
free_ep_info:
|
||||||
|
kfree(hw->ep_shm_info);
|
||||||
|
hw->ep_shm_info = NULL;
|
||||||
|
return result;
|
||||||
}
|
}
|
||||||
|
|
||||||
static void fjes_hw_cleanup(struct fjes_hw *hw)
|
static void fjes_hw_cleanup(struct fjes_hw *hw)
|
||||||
|
@ -44,7 +44,7 @@
|
|||||||
|
|
||||||
static unsigned int ring_size __ro_after_init = 128;
|
static unsigned int ring_size __ro_after_init = 128;
|
||||||
module_param(ring_size, uint, 0444);
|
module_param(ring_size, uint, 0444);
|
||||||
MODULE_PARM_DESC(ring_size, "Ring buffer size (# of pages)");
|
MODULE_PARM_DESC(ring_size, "Ring buffer size (# of 4K pages)");
|
||||||
unsigned int netvsc_ring_bytes __ro_after_init;
|
unsigned int netvsc_ring_bytes __ro_after_init;
|
||||||
|
|
||||||
static const u32 default_msg = NETIF_MSG_DRV | NETIF_MSG_PROBE |
|
static const u32 default_msg = NETIF_MSG_DRV | NETIF_MSG_PROBE |
|
||||||
@ -2801,7 +2801,7 @@ static int __init netvsc_drv_init(void)
|
|||||||
pr_info("Increased ring_size to %u (min allowed)\n",
|
pr_info("Increased ring_size to %u (min allowed)\n",
|
||||||
ring_size);
|
ring_size);
|
||||||
}
|
}
|
||||||
netvsc_ring_bytes = ring_size * PAGE_SIZE;
|
netvsc_ring_bytes = VMBUS_RING_SIZE(ring_size * 4096);
|
||||||
|
|
||||||
register_netdevice_notifier(&netvsc_netdev_notifier);
|
register_netdevice_notifier(&netvsc_netdev_notifier);
|
||||||
|
|
||||||
|
@ -120,6 +120,11 @@
|
|||||||
*/
|
*/
|
||||||
#define LAN8814_1PPM_FORMAT 17179
|
#define LAN8814_1PPM_FORMAT 17179
|
||||||
|
|
||||||
|
#define PTP_RX_VERSION 0x0248
|
||||||
|
#define PTP_TX_VERSION 0x0288
|
||||||
|
#define PTP_MAX_VERSION(x) (((x) & GENMASK(7, 0)) << 8)
|
||||||
|
#define PTP_MIN_VERSION(x) ((x) & GENMASK(7, 0))
|
||||||
|
|
||||||
#define PTP_RX_MOD 0x024F
|
#define PTP_RX_MOD 0x024F
|
||||||
#define PTP_RX_MOD_BAD_UDPV4_CHKSUM_FORCE_FCS_DIS_ BIT(3)
|
#define PTP_RX_MOD_BAD_UDPV4_CHKSUM_FORCE_FCS_DIS_ BIT(3)
|
||||||
#define PTP_RX_TIMESTAMP_EN 0x024D
|
#define PTP_RX_TIMESTAMP_EN 0x024D
|
||||||
@ -2922,6 +2927,12 @@ static void lan8814_ptp_init(struct phy_device *phydev)
|
|||||||
lanphy_write_page_reg(phydev, 5, PTP_TX_PARSE_IP_ADDR_EN, 0);
|
lanphy_write_page_reg(phydev, 5, PTP_TX_PARSE_IP_ADDR_EN, 0);
|
||||||
lanphy_write_page_reg(phydev, 5, PTP_RX_PARSE_IP_ADDR_EN, 0);
|
lanphy_write_page_reg(phydev, 5, PTP_RX_PARSE_IP_ADDR_EN, 0);
|
||||||
|
|
||||||
|
/* Disable checking for minorVersionPTP field */
|
||||||
|
lanphy_write_page_reg(phydev, 5, PTP_RX_VERSION,
|
||||||
|
PTP_MAX_VERSION(0xff) | PTP_MIN_VERSION(0x0));
|
||||||
|
lanphy_write_page_reg(phydev, 5, PTP_TX_VERSION,
|
||||||
|
PTP_MAX_VERSION(0xff) | PTP_MIN_VERSION(0x0));
|
||||||
|
|
||||||
skb_queue_head_init(&ptp_priv->tx_queue);
|
skb_queue_head_init(&ptp_priv->tx_queue);
|
||||||
skb_queue_head_init(&ptp_priv->rx_queue);
|
skb_queue_head_init(&ptp_priv->rx_queue);
|
||||||
INIT_LIST_HEAD(&ptp_priv->rx_ts_list);
|
INIT_LIST_HEAD(&ptp_priv->rx_ts_list);
|
||||||
|
@ -1622,13 +1622,19 @@ static int tun_xdp_act(struct tun_struct *tun, struct bpf_prog *xdp_prog,
|
|||||||
switch (act) {
|
switch (act) {
|
||||||
case XDP_REDIRECT:
|
case XDP_REDIRECT:
|
||||||
err = xdp_do_redirect(tun->dev, xdp, xdp_prog);
|
err = xdp_do_redirect(tun->dev, xdp, xdp_prog);
|
||||||
if (err)
|
if (err) {
|
||||||
|
dev_core_stats_rx_dropped_inc(tun->dev);
|
||||||
return err;
|
return err;
|
||||||
|
}
|
||||||
|
dev_sw_netstats_rx_add(tun->dev, xdp->data_end - xdp->data);
|
||||||
break;
|
break;
|
||||||
case XDP_TX:
|
case XDP_TX:
|
||||||
err = tun_xdp_tx(tun->dev, xdp);
|
err = tun_xdp_tx(tun->dev, xdp);
|
||||||
if (err < 0)
|
if (err < 0) {
|
||||||
|
dev_core_stats_rx_dropped_inc(tun->dev);
|
||||||
return err;
|
return err;
|
||||||
|
}
|
||||||
|
dev_sw_netstats_rx_add(tun->dev, xdp->data_end - xdp->data);
|
||||||
break;
|
break;
|
||||||
case XDP_PASS:
|
case XDP_PASS:
|
||||||
break;
|
break;
|
||||||
|
@ -1082,7 +1082,7 @@ static int iwl_dbg_tlv_override_trig_node(struct iwl_fw_runtime *fwrt,
|
|||||||
node_trig = (void *)node_tlv->data;
|
node_trig = (void *)node_tlv->data;
|
||||||
}
|
}
|
||||||
|
|
||||||
memcpy(node_trig->data + offset, trig->data, trig_data_len);
|
memcpy((u8 *)node_trig->data + offset, trig->data, trig_data_len);
|
||||||
node_tlv->length = cpu_to_le32(size);
|
node_tlv->length = cpu_to_le32(size);
|
||||||
|
|
||||||
if (policy & IWL_FW_INI_APPLY_POLICY_OVERRIDE_CFG) {
|
if (policy & IWL_FW_INI_APPLY_POLICY_OVERRIDE_CFG) {
|
||||||
|
@ -1226,12 +1226,12 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
|
|||||||
* value of the frequency. In such a case, do not abort but
|
* value of the frequency. In such a case, do not abort but
|
||||||
* configure the hardware to the desired frequency forcefully.
|
* configure the hardware to the desired frequency forcefully.
|
||||||
*/
|
*/
|
||||||
forced = opp_table->rate_clk_single != target_freq;
|
forced = opp_table->rate_clk_single != freq;
|
||||||
}
|
}
|
||||||
|
|
||||||
ret = _set_opp(dev, opp_table, opp, &target_freq, forced);
|
ret = _set_opp(dev, opp_table, opp, &freq, forced);
|
||||||
|
|
||||||
if (target_freq)
|
if (freq)
|
||||||
dev_pm_opp_put(opp);
|
dev_pm_opp_put(opp);
|
||||||
|
|
||||||
put_opp_table:
|
put_opp_table:
|
||||||
|
@ -238,7 +238,7 @@ static int __init power_init(void)
|
|||||||
if (running_on_qemu && soft_power_reg)
|
if (running_on_qemu && soft_power_reg)
|
||||||
register_sys_off_handler(SYS_OFF_MODE_POWER_OFF, SYS_OFF_PRIO_DEFAULT,
|
register_sys_off_handler(SYS_OFF_MODE_POWER_OFF, SYS_OFF_PRIO_DEFAULT,
|
||||||
qemu_power_off, (void *)soft_power_reg);
|
qemu_power_off, (void *)soft_power_reg);
|
||||||
else
|
if (!running_on_qemu || soft_power_reg)
|
||||||
power_task = kthread_run(kpowerswd, (void*)soft_power_reg,
|
power_task = kthread_run(kpowerswd, (void*)soft_power_reg,
|
||||||
KTHREAD_NAME);
|
KTHREAD_NAME);
|
||||||
if (IS_ERR(power_task)) {
|
if (IS_ERR(power_task)) {
|
||||||
|
Some files were not shown because too many files have changed in this diff.