This is the 5.4.253 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmTWBWgACgkQONu9yGCS
 aT66Iw//TwAjMECCqJ84moMMA7/fC8QrRiBLWz24f6sVGqMb3vZCiQ91Z4zEZID6
 qV06RRlk08aJqhhllWYE6mqZJZTfmGgjEWjM0OL/bHFgU3TtHC0mR5mCtoUzFTzD
 bIZb6mj8egPDgAP55Sn0/Va0jR5Y4Mp2IFdbtu68J4jy/N4aDE1nTljQamMjhoiV
 JuUVf5XZsZ+4k6kSF01TIaJCDLjij9aSBbNltC0BrfzVIEj19leBb7x4slu6VGIp
 QGkPTySjRw1xRdBUTZ/uJzXqMIqBM0A0x9M9cd97vDNWrp6Qi9G6YeBh6D7X9x++
 zy+Y1CusgH7M/nE/hOFPmgcqfJZfyf1Fa3fIa31+cMKIANg7G2dg+Gd4xxnL0FgA
 BSR2oSC5rzUK8X2/nMaduwQNMPQr8Q0vX5+KRnJB964swBvbPLplC5+NpYf0RKHD
 +bgkwN7Yxn2JqBWLkoGR9u6Mtyx0UclEVU0wKYAEwph3FLKlbiZjRPJdSa2p6gdd
 UZiMgVyTSGOlpbM31fG52RyLoePFxc7vfR/jmyVaYMUPB5xjMi355Rzxcm8VgmIi
 DArs/XUHeHeIyHRr6l6xlsx/2ihrENbO8ux9v07/jWMN/tzc5qEKZ1RmLRaaWwf7
 3A+cTGMpRwznf3DxJoAFRiC6VhezJsa/BUHaTvSYki0OSxOJ/BM=
 =Bk55
 -----END PGP SIGNATURE-----

Merge 5.4.253 into android11-5.4-lts

Changes in 5.4.253
	jbd2: fix incorrect code style
	jbd2: fix kernel-doc markups
	jbd2: remove redundant buffer io error checks
	jbd2: recheck chechpointing non-dirty buffer
	jbd2: Fix wrongly judgement for buffer head removing while doing checkpoint
	gpio: tps68470: Make tps68470_gpio_output() always set the initial value
	bcache: remove 'int n' from parameter list of bch_bucket_alloc_set()
	bcache: Fix __bch_btree_node_alloc to make the failure behavior consistent
	btrfs: qgroup: catch reserved space leaks at unmount time
	btrfs: fix race between quota disable and relocation
	btrfs: fix extent buffer leak after tree mod log failure at split_node()
	ext4: rename journal_dev to s_journal_dev inside ext4_sb_info
	ext4: Fix reusing stale buffer heads from last failed mounting
	PCI/ASPM: Return 0 or -ETIMEDOUT from pcie_retrain_link()
	PCI/ASPM: Factor out pcie_wait_for_retrain()
	PCI/ASPM: Avoid link retraining race
	dlm: cleanup plock_op vs plock_xop
	dlm: rearrange async condition return
	fs: dlm: interrupt posix locks only when process is killed
	ftrace: Add information on number of page groups allocated
	ftrace: Check if pages were allocated before calling free_pages()
	ftrace: Store the order of pages allocated in ftrace_page
	ftrace: Fix possible warning on checking all pages used in ftrace_process_locs()
	pwm: meson: Remove redundant assignment to variable fin_freq
	pwm: meson: Simplify duplicated per-channel tracking
	pwm: meson: fix handling of period/duty if greater than UINT_MAX
	scsi: qla2xxx: Fix inconsistent format argument type in qla_os.c
	scsi: qla2xxx: Array index may go out of bound
	uapi: General notification queue definitions
	keys: Fix linking a duplicate key to a keyring's assoc_array
	ext4: fix to check return value of freeze_bdev() in ext4_shutdown()
	i40e: Fix an NULL vs IS_ERR() bug for debugfs_create_dir()
	vxlan: calculate correct header length for GPE
	phy: hisilicon: Fix an out of bounds check in hisi_inno_phy_probe()
	ethernet: atheros: fix return value check in atl1e_tso_csum()
	ipv6 addrconf: fix bug where deleting a mngtmpaddr can create a new temporary address
	tcp: Reduce chance of collisions in inet6_hashfn().
	bonding: reset bond's flags when down link is P2P device
	team: reset team's flags when down link is P2P device
	platform/x86: msi-laptop: Fix rfkill out-of-sync on MSI Wind U100
	net/sched: mqprio: refactor nlattr parsing to a separate function
	net/sched: mqprio: add extack to mqprio_parse_nlattr()
	net/sched: mqprio: Add length check for TCA_MQPRIO_{MAX/MIN}_RATE64
	benet: fix return value check in be_lancer_xmit_workarounds()
	RDMA/mlx4: Make check for invalid flags stricter
	drm/msm/dpu: drop enum dpu_core_perf_data_bus_id
	drm/msm/adreno: Fix snapshot BINDLESS_DATA size
	drm/msm: Fix IS_ERR_OR_NULL() vs NULL check in a5xx_submit_in_rb()
	ASoC: fsl_spdif: Silence output on stop
	block: Fix a source code comment in include/uapi/linux/blkzoned.h
	dm raid: fix missing reconfig_mutex unlock in raid_ctr() error paths
	ata: pata_ns87415: mark ns87560_tf_read static
	ring-buffer: Fix wrong stat of cpu_buffer->read
	tracing: Fix warning in trace_buffered_event_disable()
	serial: 8250_dw: Preserve original value of DLF register
	serial: sifive: Fix sifive_serial_console_setup() section
	USB: serial: option: support Quectel EM060K_128
	USB: serial: option: add Quectel EC200A module support
	USB: serial: simple: add Kaufmann RKS+CAN VCP
	USB: serial: simple: sort driver entries
	can: gs_usb: gs_can_close(): add missing set of CAN state to CAN_STATE_STOPPED
	Revert "usb: dwc3: core: Enable AutoRetry feature in the controller"
	usb: dwc3: pci: skip BYT GPIO lookup table for hardwired phy
	usb: dwc3: don't reset device side if dwc3 was configured as host-only
	usb: ohci-at91: Fix the unhandle interrupt when resume
	USB: quirks: add quirk for Focusrite Scarlett
	usb: xhci-mtk: set the dma max_seg_size
	Revert "usb: xhci: tegra: Fix error check"
	Documentation: security-bugs.rst: update preferences when dealing with the linux-distros group
	Documentation: security-bugs.rst: clarify CVE handling
	staging: ks7010: potential buffer overflow in ks_wlan_set_encode_ext()
	hwmon: (nct7802) Fix for temp6 (PECI1) processed even if PECI1 disabled
	btrfs: check for commit error at btrfs_attach_transaction_barrier()
	tpm_tis: Explicitly check for error code
	irq-bcm6345-l1: Do not assume a fixed block to cpu mapping
	btrfs: check if the transaction was aborted at btrfs_wait_for_commit()
	virtio-net: fix race between set queues and probe
	s390/dasd: fix hanging device after quiesce/resume
	ASoC: wm8904: Fill the cache for WM8904_ADC_TEST_0 register
	dm cache policy smq: ensure IO doesn't prevent cleaner policy progress
	ACPI: processor: perflib: Use the "no limit" frequency QoS
	ACPI: processor: perflib: Avoid updating frequency QoS unnecessarily
	cpufreq: intel_pstate: Drop ACPI _PSS states table patching
	btrfs: qgroup: remove one-time use variables for quota_root checks
	btrfs: qgroup: return ENOTCONN instead of EINVAL when quotas are not enabled
	btrfs: fix race between quota disable and quota assign ioctls
	net/sched: sch_qfq: account for stab overhead in qfq_enqueue
	ASoC: cs42l51: fix driver to properly autoload with automatic module loading
	arm64: Add AMPERE1 to the Spectre-BHB affected list
	arm64: Fix bit-shifting UB in the MIDR_CPU_MODEL() macro
	perf: Fix function pointer case
	loop: Select I/O scheduler 'none' from inside add_disk()
	word-at-a-time: use the same return type for has_zero regardless of endianness
	KVM: s390: fix sthyi error handling
	net/mlx5: DR, fix memory leak in mlx5dr_cmd_create_reformat_ctx
	net/mlx5e: fix return value check in mlx5e_ipsec_remove_trailer()
	rtnetlink: let rtnl_bridge_setlink checks IFLA_BRIDGE_MODE length
	perf test uprobe_from_different_cu: Skip if there is no gcc
	net: sched: cls_u32: Fix match key mis-addressing
	mISDN: hfcpci: Fix potential deadlock on &hc->lock
	net: annotate data-races around sk->sk_max_pacing_rate
	net: add missing READ_ONCE(sk->sk_rcvlowat) annotation
	net: add missing READ_ONCE(sk->sk_sndbuf) annotation
	net: add missing READ_ONCE(sk->sk_rcvbuf) annotation
	net: add missing data-race annotations around sk->sk_peek_off
	net: add missing data-race annotation for sk_ll_usec
	net/sched: cls_u32: No longer copy tcf_result on update to avoid use-after-free
	net/sched: cls_fw: No longer copy tcf_result on update to avoid use-after-free
	net/sched: cls_route: No longer copy tcf_result on update to avoid use-after-free
	bpf: sockmap: Remove preempt_disable in sock_map_sk_acquire
	driver core: add device probe log helper
	net: ll_temac: Switch to use dev_err_probe() helper
	net: ll_temac: fix error checking of irq_of_parse_and_map()
	net: dcb: choose correct policy to parse DCB_ATTR_BCN
	ip6mr: Fix skb_under_panic in ip6mr_cache_report()
	tcp_metrics: fix addr_same() helper
	tcp_metrics: annotate data-races around tm->tcpm_stamp
	tcp_metrics: annotate data-races around tm->tcpm_lock
	tcp_metrics: annotate data-races around tm->tcpm_vals[]
	tcp_metrics: annotate data-races around tm->tcpm_net
	tcp_metrics: fix data-race in tcpm_suck_dst() vs fastopen
	scsi: zfcp: Defer fc_rport blocking until after ADISC response
	libceph: fix potential hang in ceph_osdc_notify()
	USB: zaurus: Add ID for A-300/B-500/C-700
	mtd: spinand: toshiba: Fix ecc_get_status
	mtd: rawnand: meson: fix OOB available bytes for ECC
	net: tun_chr_open(): set sk_uid from current_fsuid()
	net: tap_open(): set sk_uid from current_fsuid()
	fs/sysv: Null check to prevent null-ptr-deref bug
	Bluetooth: L2CAP: Fix use-after-free in l2cap_sock_ready_cb
	net: usbnet: Fix WARNING in usbnet_start_xmit/usb_submit_urb
	fs: Protect reconfiguration of sb read-write from racing writes
	ext2: Drop fragment support
	test_firmware: prevent race conditions by a correct implementation of locking
	test_firmware: return ENOMEM instead of ENOSPC on failed memory allocation
	mtd: rawnand: omap_elm: Fix incorrect type in assignment
	powerpc/mm/altmap: Fix altmap boundary check
	selftests/rseq: check if libc rseq support is registered
	selftests/rseq: Play nice with binaries statically linked against glibc 2.35+
	PM / wakeirq: support enabling wake-up irq after runtime_suspend called
	PM: sleep: wakeirq: fix wake irq arming
	ceph: show tasks waiting on caps in debugfs caps file
	ceph: use kill_anon_super helper
	ceph: defer stopping mdsc delayed_work
	arm64: dts: stratix10: fix incorrect I2C property for SCL signal
	ARM: dts: imx6sll: Make ssi node name same as other platforms
	ARM: dts: imx: Align L2 cache-controller nodename with dtschema
	ARM: dts: imx: add usb alias
	ARM: dts: imx6sll: fixup of operating points
	ARM: dts: nxp/imx6sll: fix wrong property name in usbphy node
	driver core: Annotate dev_err_probe() with __must_check
	driver code: print symbolic error code
	drivers: core: fix kernel-doc markup for dev_err_probe()
	Revert "driver core: Annotate dev_err_probe() with __must_check"
	Linux 5.4.253

Change-Id: I9c8d2b7250a3bcd3cb368c9d9e362a82c2fa5159
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Greg Kroah-Hartman 2023-08-23 15:06:20 +00:00
commit 279267442f
140 changed files with 1443 additions and 831 deletions


@@ -56,31 +56,28 @@ information submitted to the security list and any followup discussions
of the report are treated confidentially even after the embargo has been
lifted, in perpetuity.
Coordination
------------
Coordination with other groups
------------------------------
Fixes for sensitive bugs, such as those that might lead to privilege
escalations, may need to be coordinated with the private
<linux-distros@vs.openwall.org> mailing list so that distribution vendors
are well prepared to issue a fixed kernel upon public disclosure of the
upstream fix. Distros will need some time to test the proposed patch and
will generally request at least a few days of embargo, and vendor update
publication prefers to happen Tuesday through Thursday. When appropriate,
the security team can assist with this coordination, or the reporter can
include linux-distros from the start. In this case, remember to prefix
the email Subject line with "[vs]" as described in the linux-distros wiki:
<http://oss-security.openwall.org/wiki/mailing-lists/distros#how-to-use-the-lists>
The kernel security team strongly recommends that reporters of potential
security issues NEVER contact the "linux-distros" mailing list until
AFTER discussing it with the kernel security team. Do not Cc: both
lists at once. You may contact the linux-distros mailing list after a
fix has been agreed on and you fully understand the requirements that
doing so will impose on you and the kernel community.
The different lists have different goals and the linux-distros rules do
not contribute to actually fixing any potential security problems.
CVE assignment
--------------
The security team does not normally assign CVEs, nor do we require them
for reports or fixes, as this can needlessly complicate the process and
may delay the bug handling. If a reporter wishes to have a CVE identifier
assigned ahead of public disclosure, they will need to contact the private
linux-distros list, described above. When such a CVE identifier is known
before a patch is provided, it is desirable to mention it in the commit
message if the reporter agrees.
The security team does not assign CVEs, nor do we require them for
reports or fixes, as this can needlessly complicate the process and may
delay the bug handling. If a reporter wishes to have a CVE identifier
assigned, they should find one by themselves, for example by contacting
MITRE directly. However under no circumstances will a patch inclusion
be delayed to wait for a CVE identifier to arrive.
Non-disclosure agreements
-------------------------


@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 4
SUBLEVEL = 252
SUBLEVEL = 253
EXTRAVERSION =
NAME = Kleptomaniac Octopus


@@ -59,7 +59,7 @@
interrupt-parent = <&avic>;
ranges;
L2: l2-cache@30000000 {
L2: cache-controller@30000000 {
compatible = "arm,l210-cache";
reg = <0x30000000 0x1000>;
cache-unified;


@@ -45,6 +45,10 @@
spi1 = &ecspi2;
spi2 = &ecspi3;
spi3 = &ecspi4;
usb0 = &usbotg;
usb1 = &usbh1;
usb2 = &usbh2;
usb3 = &usbh3;
usbphy0 = &usbphy1;
usbphy1 = &usbphy2;
};
@@ -255,7 +259,7 @@
interrupt-parent = <&intc>;
};
L2: l2-cache@a02000 {
L2: cache-controller@a02000 {
compatible = "arm,pl310-cache";
reg = <0x00a02000 0x1000>;
interrupts = <0 92 IRQ_TYPE_LEVEL_HIGH>;


@@ -39,6 +39,9 @@
spi1 = &ecspi2;
spi2 = &ecspi3;
spi3 = &ecspi4;
usb0 = &usbotg1;
usb1 = &usbotg2;
usb2 = &usbh;
usbphy0 = &usbphy1;
usbphy1 = &usbphy2;
};
@@ -136,7 +139,7 @@
interrupt-parent = <&intc>;
};
L2: l2-cache@a02000 {
L2: cache-controller@a02000 {
compatible = "arm,pl310-cache";
reg = <0x00a02000 0x1000>;
interrupts = <0 92 IRQ_TYPE_LEVEL_HIGH>;


@@ -36,6 +36,8 @@
spi1 = &ecspi2;
spi3 = &ecspi3;
spi4 = &ecspi4;
usb0 = &usbotg1;
usb1 = &usbotg2;
usbphy0 = &usbphy1;
usbphy1 = &usbphy2;
};
@@ -49,20 +51,18 @@
device_type = "cpu";
reg = <0>;
next-level-cache = <&L2>;
operating-points = <
operating-points =
/* kHz uV */
996000 1275000
792000 1175000
396000 1075000
198000 975000
>;
fsl,soc-operating-points = <
<996000 1275000>,
<792000 1175000>,
<396000 1075000>,
<198000 975000>;
fsl,soc-operating-points =
/* ARM kHz SOC-PU uV */
996000 1175000
792000 1175000
396000 1175000
198000 1175000
>;
<996000 1175000>,
<792000 1175000>,
<396000 1175000>,
<198000 1175000>;
clock-latency = <61036>; /* two CLK32 periods */
#cooling-cells = <2>;
clocks = <&clks IMX6SLL_CLK_ARM>,
@@ -137,7 +137,7 @@
interrupt-parent = <&intc>;
};
L2: l2-cache@a02000 {
L2: cache-controller@a02000 {
compatible = "arm,pl310-cache";
reg = <0x00a02000 0x1000>;
interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>;
@@ -272,7 +272,7 @@
status = "disabled";
};
ssi1: ssi-controller@2028000 {
ssi1: ssi@2028000 {
compatible = "fsl,imx6sl-ssi", "fsl,imx51-ssi";
reg = <0x02028000 0x4000>;
interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
@@ -285,7 +285,7 @@
status = "disabled";
};
ssi2: ssi-controller@202c000 {
ssi2: ssi@202c000 {
compatible = "fsl,imx6sl-ssi", "fsl,imx51-ssi";
reg = <0x0202c000 0x4000>;
interrupts = <GIC_SPI 47 IRQ_TYPE_LEVEL_HIGH>;
@@ -298,7 +298,7 @@
status = "disabled";
};
ssi3: ssi-controller@2030000 {
ssi3: ssi@2030000 {
compatible = "fsl,imx6sl-ssi", "fsl,imx51-ssi";
reg = <0x02030000 0x4000>;
interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
@@ -550,7 +550,7 @@
reg = <0x020ca000 0x1000>;
interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clks IMX6SLL_CLK_USBPHY2>;
phy-reg_3p0-supply = <&reg_3p0>;
phy-3p0-supply = <&reg_3p0>;
fsl,anatop = <&anatop>;
};


@@ -49,6 +49,9 @@
spi2 = &ecspi3;
spi3 = &ecspi4;
spi4 = &ecspi5;
usb0 = &usbotg1;
usb1 = &usbotg2;
usb2 = &usbh;
usbphy0 = &usbphy1;
usbphy1 = &usbphy2;
};
@@ -187,7 +190,7 @@
interrupt-parent = <&intc>;
};
L2: l2-cache@a02000 {
L2: cache-controller@a02000 {
compatible = "arm,pl310-cache";
reg = <0x00a02000 0x1000>;
interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>;


@@ -47,6 +47,8 @@
spi1 = &ecspi2;
spi2 = &ecspi3;
spi3 = &ecspi4;
usb0 = &usbotg1;
usb1 = &usbotg2;
usbphy0 = &usbphy1;
usbphy1 = &usbphy2;
};


@@ -7,6 +7,12 @@
#include <dt-bindings/reset/imx7-reset.h>
/ {
aliases {
usb0 = &usbotg1;
usb1 = &usbotg2;
usb2 = &usbh;
};
cpus {
cpu0: cpu@0 {
clock-frequency = <996000000>;


@@ -47,6 +47,8 @@
spi1 = &ecspi2;
spi2 = &ecspi3;
spi3 = &ecspi4;
usb0 = &usbotg1;
usb1 = &usbh;
};
cpus {


@@ -129,7 +129,7 @@
status = "okay";
clock-frequency = <100000>;
i2c-sda-falling-time-ns = <890>; /* hcnt */
i2c-sdl-falling-time-ns = <890>; /* lcnt */
i2c-scl-falling-time-ns = <890>; /* lcnt */
adc@14 {
compatible = "lltc,ltc2497";


@@ -41,7 +41,7 @@
(((midr) & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT)
#define MIDR_CPU_MODEL(imp, partnum) \
(((imp) << MIDR_IMPLEMENTOR_SHIFT) | \
((_AT(u32, imp) << MIDR_IMPLEMENTOR_SHIFT) | \
(0xf << MIDR_ARCHITECTURE_SHIFT) | \
((partnum) << MIDR_PARTNUM_SHIFT))
@@ -59,6 +59,7 @@
#define ARM_CPU_IMP_NVIDIA 0x4E
#define ARM_CPU_IMP_FUJITSU 0x46
#define ARM_CPU_IMP_HISI 0x48
#define ARM_CPU_IMP_AMPERE 0xC0
#define ARM_CPU_PART_AEM_V8 0xD0F
#define ARM_CPU_PART_FOUNDATION 0xD00
@@ -101,6 +102,8 @@
#define HISI_CPU_PART_TSV110 0xD01
#define AMPERE_CPU_PART_AMPERE1 0xAC3
#define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
#define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
#define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
@@ -131,6 +134,7 @@
#define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)
#define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX)
#define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
/* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
#define MIDR_FUJITSU_ERRATUM_010001 MIDR_FUJITSU_A64FX
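
For context on the MIDR_CPU_MODEL() change above: implementer IDs such as Ampere's 0xC0 land in bits [31:24], so shifting a plain int left by MIDR_IMPLEMENTOR_SHIFT (24) sets the sign bit, which is undefined behaviour for signed integers in C; widening with _AT(u32, ...) first makes the shift well defined. A standalone illustration of the same idea (not the kernel macro itself; the field shifts follow the definitions above):

#include <stdint.h>

static inline uint32_t midr_cpu_model(uint32_t imp, uint32_t partnum)
{
	/* imp is already a 32-bit unsigned value here, so imp << 24 is a
	 * defined unsigned shift, unlike (0xC0 << 24) on a plain int.
	 */
	return (imp << 24) |		/* implementer, bits [31:24] */
	       (0xfu << 16) |		/* architecture, bits [19:16] */
	       (partnum << 4);		/* part number, bits [15:4] */
}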


@@ -1145,6 +1145,10 @@ u8 spectre_bhb_loop_affected(int scope)
MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
{},
};
static const struct midr_range spectre_bhb_k11_list[] = {
MIDR_ALL_VERSIONS(MIDR_AMPERE1),
{},
};
static const struct midr_range spectre_bhb_k8_list[] = {
MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
@@ -1155,6 +1159,8 @@ u8 spectre_bhb_loop_affected(int scope)
k = 32;
else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
k = 24;
else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
k = 11;
else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
k = 8;


@@ -34,7 +34,7 @@ static inline long find_zero(unsigned long mask)
return leading_zero_bits >> 3;
}
static inline bool has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c)
static inline unsigned long has_zero(unsigned long val, unsigned long *data, const struct word_at_a_time *c)
{
unsigned long rhs = val | c->low_bits;
*data = rhs;
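
The signature change above matters because callers do not treat the result of has_zero() as a plain boolean: the returned mask is fed through prep_zero_mask() and create_zero_mask() before find_zero() extracts the byte offset, so truncating it to bool discards the bits that locate the zero byte on big-endian. A rough sketch of the usual caller pattern (cf. the string-scanning loops in fs/namei.c and lib/string.c; exact details vary by architecture, and word alignment of src is assumed):

#include <asm/word-at-a-time.h>

static size_t find_nul_offset(const char *src, size_t res)
{
	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
	unsigned long c, data;

	for (;; res += sizeof(unsigned long)) {
		c = read_word_at_a_time(src + res);
		if (has_zero(c, &data, &constants)) {
			data = prep_zero_mask(c, data, &constants);
			data = create_zero_mask(data);
			return res + find_zero(data);	/* NUL byte offset */
		}
	}
}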


@@ -279,8 +279,7 @@ void __ref vmemmap_free(unsigned long start, unsigned long end,
start = _ALIGN_DOWN(start, page_size);
if (altmap) {
alt_start = altmap->base_pfn;
alt_end = altmap->base_pfn + altmap->reserve +
altmap->free + altmap->alloc + altmap->align;
alt_end = altmap->base_pfn + altmap->reserve + altmap->free;
}
pr_debug("vmemmap_free %lx...%lx\n", start, end);


@@ -460,9 +460,9 @@ static int sthyi_update_cache(u64 *rc)
*
* Fills the destination with system information returned by the STHYI
* instruction. The data is generated by emulation or execution of STHYI,
* if available. The return value is the condition code that would be
* returned, the rc parameter is the return code which is passed in
* register R2 + 1.
* if available. The return value is either a negative error value or
* the condition code that would be returned, the rc parameter is the
* return code which is passed in register R2 + 1.
*/
int sthyi_fill(void *dst, u64 *rc)
{


@@ -360,8 +360,8 @@ static int handle_partial_execution(struct kvm_vcpu *vcpu)
*/
int handle_sthyi(struct kvm_vcpu *vcpu)
{
int reg1, reg2, r = 0;
u64 code, addr, cc = 0, rc = 0;
int reg1, reg2, cc = 0, r = 0;
u64 code, addr, rc = 0;
struct sthyi_sctns *sctns = NULL;
if (!test_kvm_facility(vcpu->kvm, 74))
@@ -392,7 +392,10 @@ int handle_sthyi(struct kvm_vcpu *vcpu)
return -ENOMEM;
cc = sthyi_fill(sctns, &rc);
if (cc < 0) {
free_page((unsigned long)sctns);
return cc;
}
out:
if (!cc) {
r = write_guest(vcpu, addr, reg2, sctns, PAGE_SIZE);


@@ -56,6 +56,8 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
{
acpi_status status = 0;
unsigned long long ppc = 0;
s32 qos_value;
int index;
int ret;
if (!pr)
@@ -75,17 +77,30 @@ static int acpi_processor_get_platform_limit(struct acpi_processor *pr)
return -ENODEV;
}
pr_debug("CPU %d: _PPC is %d - frequency %s limited\n", pr->id,
(int)ppc, ppc ? "" : "not");
index = ppc;
pr->performance_platform_limit = (int)ppc;
if (ppc >= pr->performance->state_count ||
unlikely(!freq_qos_request_active(&pr->perflib_req)))
if (pr->performance_platform_limit == index ||
ppc >= pr->performance->state_count)
return 0;
ret = freq_qos_update_request(&pr->perflib_req,
pr->performance->states[ppc].core_frequency * 1000);
pr_debug("CPU %d: _PPC is %d - frequency %s limited\n", pr->id,
index, index ? "is" : "is not");
pr->performance_platform_limit = index;
if (unlikely(!freq_qos_request_active(&pr->perflib_req)))
return 0;
/*
* If _PPC returns 0, it means that all of the available states can be
* used ("no limit").
*/
if (index == 0)
qos_value = FREQ_QOS_MAX_DEFAULT_VALUE;
else
qos_value = pr->performance->states[index].core_frequency * 1000;
ret = freq_qos_update_request(&pr->perflib_req, qos_value);
if (ret < 0) {
pr_warn("Failed to update perflib freq constraint: CPU%d (%d)\n",
pr->id, ret);
@@ -168,9 +183,16 @@ void acpi_processor_ppc_init(struct cpufreq_policy *policy)
if (!pr)
continue;
/*
* Reset performance_platform_limit in case there is a stale
* value in it, so as to make it match the "no limit" QoS value
* below.
*/
pr->performance_platform_limit = 0;
ret = freq_qos_add_request(&policy->constraints,
&pr->perflib_req,
FREQ_QOS_MAX, INT_MAX);
&pr->perflib_req, FREQ_QOS_MAX,
FREQ_QOS_MAX_DEFAULT_VALUE);
if (ret < 0)
pr_err("Failed to add freq constraint for CPU%d (%d)\n",
cpu, ret);


@@ -261,7 +261,7 @@ static u8 ns87560_check_status(struct ata_port *ap)
* LOCKING:
* Inherited from caller.
*/
void ns87560_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
static void ns87560_tf_read(struct ata_port *ap, struct ata_taskfile *tf)
{
struct ata_ioports *ioaddr = &ap->ioaddr;


@@ -4085,6 +4085,50 @@ define_dev_printk_level(_dev_info, KERN_INFO);
#endif
/**
* dev_err_probe - probe error check and log helper
* @dev: the pointer to the struct device
* @err: error value to test
* @fmt: printf-style format string
* @...: arguments as specified in the format string
*
* This helper implements common pattern present in probe functions for error
* checking: print debug or error message depending if the error value is
* -EPROBE_DEFER and propagate error upwards.
* It replaces code sequence::
* if (err != -EPROBE_DEFER)
* dev_err(dev, ...);
* else
* dev_dbg(dev, ...);
* return err;
*
* with::
*
* return dev_err_probe(dev, err, ...);
*
* Returns @err.
*
*/
int dev_err_probe(const struct device *dev, int err, const char *fmt, ...)
{
struct va_format vaf;
va_list args;
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
if (err != -EPROBE_DEFER)
dev_err(dev, "error %pe: %pV", ERR_PTR(err), &vaf);
else
dev_dbg(dev, "error %pe: %pV", ERR_PTR(err), &vaf);
va_end(args);
return err;
}
EXPORT_SYMBOL_GPL(dev_err_probe);
static inline bool fwnode_is_primary(struct fwnode_handle *fwnode)
{
return fwnode && !IS_ERR(fwnode->secondary);
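
A minimal example of the pattern this helper replaces, in the spirit of the ll_temac conversion later in this series; the driver and supply names here are invented for illustration:

#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>

static int foo_probe(struct platform_device *pdev)
{
	struct regulator *vdd;

	vdd = devm_regulator_get(&pdev->dev, "vdd");
	if (IS_ERR(vdd))
		/* dev_dbg() for -EPROBE_DEFER, dev_err() otherwise, and
		 * the error is propagated, all in one statement.
		 */
		return dev_err_probe(&pdev->dev, PTR_ERR(vdd),
				     "failed to get vdd regulator\n");

	return 0;
}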


@@ -25,8 +25,11 @@ extern u64 pm_runtime_active_time(struct device *dev);
#define WAKE_IRQ_DEDICATED_ALLOCATED BIT(0)
#define WAKE_IRQ_DEDICATED_MANAGED BIT(1)
#define WAKE_IRQ_DEDICATED_REVERSE BIT(2)
#define WAKE_IRQ_DEDICATED_MASK (WAKE_IRQ_DEDICATED_ALLOCATED | \
WAKE_IRQ_DEDICATED_MANAGED)
WAKE_IRQ_DEDICATED_MANAGED | \
WAKE_IRQ_DEDICATED_REVERSE)
#define WAKE_IRQ_DEDICATED_ENABLED BIT(3)
struct wake_irq {
struct device *dev;
@@ -39,7 +42,8 @@ extern void dev_pm_arm_wake_irq(struct wake_irq *wirq);
extern void dev_pm_disarm_wake_irq(struct wake_irq *wirq);
extern void dev_pm_enable_wake_irq_check(struct device *dev,
bool can_change_status);
extern void dev_pm_disable_wake_irq_check(struct device *dev);
extern void dev_pm_disable_wake_irq_check(struct device *dev, bool cond_disable);
extern void dev_pm_enable_wake_irq_complete(struct device *dev);
#ifdef CONFIG_PM_SLEEP


@@ -659,6 +659,8 @@ static int rpm_suspend(struct device *dev, int rpmflags)
if (retval)
goto fail;
dev_pm_enable_wake_irq_complete(dev);
no_callback:
__update_runtime_status(dev, RPM_SUSPENDED);
pm_runtime_deactivate_timer(dev);
@@ -704,7 +706,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
return retval;
fail:
dev_pm_disable_wake_irq_check(dev);
dev_pm_disable_wake_irq_check(dev, true);
__update_runtime_status(dev, RPM_ACTIVE);
dev->power.deferred_resume = false;
wake_up_all(&dev->power.wait_queue);
@@ -887,7 +889,7 @@ static int rpm_resume(struct device *dev, int rpmflags)
callback = RPM_GET_CALLBACK(dev, runtime_resume);
dev_pm_disable_wake_irq_check(dev);
dev_pm_disable_wake_irq_check(dev, false);
retval = rpm_callback(callback, dev);
if (retval) {
__update_runtime_status(dev, RPM_SUSPENDED);


@@ -145,24 +145,7 @@ static irqreturn_t handle_threaded_wake_irq(int irq, void *_wirq)
return IRQ_HANDLED;
}
/**
* dev_pm_set_dedicated_wake_irq - Request a dedicated wake-up interrupt
* @dev: Device entry
* @irq: Device wake-up interrupt
*
* Unless your hardware has separate wake-up interrupts in addition
* to the device IO interrupts, you don't need this.
*
* Sets up a threaded interrupt handler for a device that has
* a dedicated wake-up interrupt in addition to the device IO
* interrupt.
*
* The interrupt starts disabled, and needs to be managed for
* the device by the bus code or the device driver using
* dev_pm_enable_wake_irq() and dev_pm_disable_wake_irq()
* functions.
*/
int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
static int __dev_pm_set_dedicated_wake_irq(struct device *dev, int irq, unsigned int flag)
{
struct wake_irq *wirq;
int err;
@@ -200,7 +183,7 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
if (err)
goto err_free_irq;
wirq->status = WAKE_IRQ_DEDICATED_ALLOCATED;
wirq->status = WAKE_IRQ_DEDICATED_ALLOCATED | flag;
return err;
@@ -213,8 +196,57 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
return err;
}
/**
* dev_pm_set_dedicated_wake_irq - Request a dedicated wake-up interrupt
* @dev: Device entry
* @irq: Device wake-up interrupt
*
* Unless your hardware has separate wake-up interrupts in addition
* to the device IO interrupts, you don't need this.
*
* Sets up a threaded interrupt handler for a device that has
* a dedicated wake-up interrupt in addition to the device IO
* interrupt.
*
* The interrupt starts disabled, and needs to be managed for
* the device by the bus code or the device driver using
* dev_pm_enable_wake_irq*() and dev_pm_disable_wake_irq*()
* functions.
*/
int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
{
return __dev_pm_set_dedicated_wake_irq(dev, irq, 0);
}
EXPORT_SYMBOL_GPL(dev_pm_set_dedicated_wake_irq);
/**
* dev_pm_set_dedicated_wake_irq_reverse - Request a dedicated wake-up interrupt
* with reverse enable ordering
* @dev: Device entry
* @irq: Device wake-up interrupt
*
* Unless your hardware has separate wake-up interrupts in addition
* to the device IO interrupts, you don't need this.
*
* Sets up a threaded interrupt handler for a device that has a dedicated
* wake-up interrupt in addition to the device IO interrupt. It sets
* the status of WAKE_IRQ_DEDICATED_REVERSE to tell rpm_suspend()
* to enable dedicated wake-up interrupt after running the runtime suspend
* callback for @dev.
*
* The interrupt starts disabled, and needs to be managed for
* the device by the bus code or the device driver using
* dev_pm_enable_wake_irq*() and dev_pm_disable_wake_irq*()
* functions.
*/
int dev_pm_set_dedicated_wake_irq_reverse(struct device *dev, int irq)
{
return __dev_pm_set_dedicated_wake_irq(dev, irq, WAKE_IRQ_DEDICATED_REVERSE);
}
EXPORT_SYMBOL_GPL(dev_pm_set_dedicated_wake_irq_reverse);
/**
* dev_pm_enable_wake_irq - Enable device wake-up interrupt
* @dev: Device
@@ -285,25 +317,56 @@ void dev_pm_enable_wake_irq_check(struct device *dev,
return;
enable:
enable_irq(wirq->irq);
if (!can_change_status || !(wirq->status & WAKE_IRQ_DEDICATED_REVERSE)) {
enable_irq(wirq->irq);
wirq->status |= WAKE_IRQ_DEDICATED_ENABLED;
}
}
/**
* dev_pm_disable_wake_irq_check - Checks and disables wake-up interrupt
* @dev: Device
* @cond_disable: if set, also check WAKE_IRQ_DEDICATED_REVERSE
*
* Disables wake-up interrupt conditionally based on status.
* Should be only called from rpm_suspend() and rpm_resume() path.
*/
void dev_pm_disable_wake_irq_check(struct device *dev)
void dev_pm_disable_wake_irq_check(struct device *dev, bool cond_disable)
{
struct wake_irq *wirq = dev->power.wakeirq;
if (!wirq || !((wirq->status & WAKE_IRQ_DEDICATED_MASK)))
return;
if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED)
if (cond_disable && (wirq->status & WAKE_IRQ_DEDICATED_REVERSE))
return;
if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED) {
wirq->status &= ~WAKE_IRQ_DEDICATED_ENABLED;
disable_irq_nosync(wirq->irq);
}
}
/**
* dev_pm_enable_wake_irq_complete - enable wake IRQ not enabled before
* @dev: Device using the wake IRQ
*
* Enable wake IRQ conditionally based on status, mainly used if want to
* enable wake IRQ after running ->runtime_suspend() which depends on
* WAKE_IRQ_DEDICATED_REVERSE.
*
* Should be only called from rpm_suspend() path.
*/
void dev_pm_enable_wake_irq_complete(struct device *dev)
{
struct wake_irq *wirq = dev->power.wakeirq;
if (!wirq || !(wirq->status & WAKE_IRQ_DEDICATED_MASK))
return;
if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED &&
wirq->status & WAKE_IRQ_DEDICATED_REVERSE)
enable_irq(wirq->irq);
}
/**
@@ -320,7 +383,7 @@ void dev_pm_arm_wake_irq(struct wake_irq *wirq)
if (device_may_wakeup(wirq->dev)) {
if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED &&
!pm_runtime_status_suspended(wirq->dev))
!(wirq->status & WAKE_IRQ_DEDICATED_ENABLED))
enable_irq(wirq->irq);
enable_irq_wake(wirq->irq);
@@ -343,7 +406,7 @@ void dev_pm_disarm_wake_irq(struct wake_irq *wirq)
disable_irq_wake(wirq->irq);
if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED &&
!pm_runtime_status_suspended(wirq->dev))
!(wirq->status & WAKE_IRQ_DEDICATED_ENABLED))
disable_irq_nosync(wirq->irq);
}
}
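
To illustrate the intended use of the new reverse-ordering variant: a driver whose dedicated wake IRQ would fire spuriously if unmasked while the device is still active can ask the PM core to enable it only after ->runtime_suspend() has run. A hypothetical probe sketch (device and IRQ names invented):

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/pm_wakeirq.h>

static int bar_probe(struct platform_device *pdev)
{
	int irq, err;

	irq = platform_get_irq_byname(pdev, "wakeup");
	if (irq < 0)
		return irq;

	device_init_wakeup(&pdev->dev, true);

	/* Sets WAKE_IRQ_DEDICATED_REVERSE, so rpm_suspend() enables the
	 * IRQ via dev_pm_enable_wake_irq_complete() after the suspend
	 * callback instead of before it.
	 */
	err = dev_pm_set_dedicated_wake_irq_reverse(&pdev->dev, irq);
	if (err)
		return err;

	pm_runtime_enable(&pdev->dev);
	return 0;
}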


@@ -2114,7 +2114,7 @@ static int loop_add(struct loop_device **l, int i)
lo->tag_set.queue_depth = 128;
lo->tag_set.numa_node = NUMA_NO_NODE;
lo->tag_set.cmd_size = sizeof(struct loop_cmd);
lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_NO_SCHED;
lo->tag_set.driver_data = lo;
err = blk_mq_alloc_tag_set(&lo->tag_set);


@@ -266,6 +266,7 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
int size = 0;
int status;
u32 expected;
int rc;
if (count < TPM_HEADER_SIZE) {
size = -EIO;
@@ -285,8 +286,13 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
goto out;
}
size += recv_data(chip, &buf[TPM_HEADER_SIZE],
expected - TPM_HEADER_SIZE);
rc = recv_data(chip, &buf[TPM_HEADER_SIZE],
expected - TPM_HEADER_SIZE);
if (rc < 0) {
size = rc;
goto out;
}
size += rc;
if (size < expected) {
dev_err(&chip->dev, "Unable to read remainder of result\n");
size = -ETIME;


@@ -442,20 +442,6 @@ static void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
(u32) cpu->acpi_perf_data.states[i].control);
}
/*
* The _PSS table doesn't contain whole turbo frequency range.
* This just contains +1 MHZ above the max non turbo frequency,
* with control value corresponding to max turbo ratio. But
* when cpufreq set policy is called, it will call with this
* max frequency, which will cause a reduced performance as
* this driver uses real max turbo frequency as the max
* frequency. So correct this frequency in _PSS table to
* correct max turbo frequency based on the turbo state.
* Also need to convert to MHz as _PSS freq is in MHz.
*/
if (!global.turbo_disabled)
cpu->acpi_perf_data.states[0].core_frequency =
policy->cpuinfo.max_freq / 1000;
cpu->valid_pss_table = true;
pr_debug("_PPC limits will be enforced\n");


@@ -91,13 +91,13 @@ static int tps68470_gpio_output(struct gpio_chip *gc, unsigned int offset,
struct tps68470_gpio_data *tps68470_gpio = gpiochip_get_data(gc);
struct regmap *regmap = tps68470_gpio->tps68470_regmap;
/* Set the initial value */
tps68470_gpio_set(gc, offset, value);
/* rest are always outputs */
if (offset >= TPS68470_N_REGULAR_GPIO)
return 0;
/* Set the initial value */
tps68470_gpio_set(gc, offset, value);
return regmap_update_bits(regmap, TPS68470_GPIO_CTL_REG_A(offset),
TPS68470_GPIO_MODE_MASK,
TPS68470_GPIO_MODE_OUT_CMOS);


@@ -71,7 +71,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit
* since we've already mapped it once in
* submit_reloc()
*/
if (WARN_ON(!ptr))
if (WARN_ON(IS_ERR_OR_NULL(ptr)))
return;
for (i = 0; i < dwords; i++) {


@@ -200,7 +200,7 @@ static const struct a6xx_shader_block {
SHADER(A6XX_SP_LB_3_DATA, 0x800),
SHADER(A6XX_SP_LB_4_DATA, 0x800),
SHADER(A6XX_SP_LB_5_DATA, 0x200),
SHADER(A6XX_SP_CB_BINDLESS_DATA, 0x2000),
SHADER(A6XX_SP_CB_BINDLESS_DATA, 0x800),
SHADER(A6XX_SP_CB_LEGACY_DATA, 0x280),
SHADER(A6XX_SP_UAV_DATA, 0x80),
SHADER(A6XX_SP_INST_TAG, 0x80),


@@ -14,19 +14,6 @@
#define DPU_PERF_DEFAULT_MAX_CORE_CLK_RATE 412500000
/**
* enum dpu_core_perf_data_bus_id - data bus identifier
* @DPU_CORE_PERF_DATA_BUS_ID_MNOC: DPU/MNOC data bus
* @DPU_CORE_PERF_DATA_BUS_ID_LLCC: MNOC/LLCC data bus
* @DPU_CORE_PERF_DATA_BUS_ID_EBI: LLCC/EBI data bus
*/
enum dpu_core_perf_data_bus_id {
DPU_CORE_PERF_DATA_BUS_ID_MNOC,
DPU_CORE_PERF_DATA_BUS_ID_LLCC,
DPU_CORE_PERF_DATA_BUS_ID_EBI,
DPU_CORE_PERF_DATA_BUS_ID_MAX,
};
/**
* struct dpu_core_perf_params - definition of performance parameters
* @max_per_pipe_ib: maximum instantaneous bandwidth request


@@ -708,7 +708,7 @@ static umode_t nct7802_temp_is_visible(struct kobject *kobj,
if (index >= 38 && index < 46 && !(reg & 0x01)) /* PECI 0 */
return 0;
if (index >= 0x46 && (!(reg & 0x02))) /* PECI 1 */
if (index >= 46 && !(reg & 0x02)) /* PECI 1 */
return 0;
return attr->mode;


@@ -556,15 +556,15 @@ static int set_qp_rss(struct mlx4_ib_dev *dev, struct mlx4_ib_rss *rss_ctx,
return (-EOPNOTSUPP);
}
if (ucmd->rx_hash_fields_mask & ~(MLX4_IB_RX_HASH_SRC_IPV4 |
MLX4_IB_RX_HASH_DST_IPV4 |
MLX4_IB_RX_HASH_SRC_IPV6 |
MLX4_IB_RX_HASH_DST_IPV6 |
MLX4_IB_RX_HASH_SRC_PORT_TCP |
MLX4_IB_RX_HASH_DST_PORT_TCP |
MLX4_IB_RX_HASH_SRC_PORT_UDP |
MLX4_IB_RX_HASH_DST_PORT_UDP |
MLX4_IB_RX_HASH_INNER)) {
if (ucmd->rx_hash_fields_mask & ~(u64)(MLX4_IB_RX_HASH_SRC_IPV4 |
MLX4_IB_RX_HASH_DST_IPV4 |
MLX4_IB_RX_HASH_SRC_IPV6 |
MLX4_IB_RX_HASH_DST_IPV6 |
MLX4_IB_RX_HASH_SRC_PORT_TCP |
MLX4_IB_RX_HASH_DST_PORT_TCP |
MLX4_IB_RX_HASH_SRC_PORT_UDP |
MLX4_IB_RX_HASH_DST_PORT_UDP |
MLX4_IB_RX_HASH_INNER)) {
pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n",
ucmd->rx_hash_fields_mask);
return (-EOPNOTSUPP);


@@ -82,6 +82,7 @@ struct bcm6345_l1_chip {
};
struct bcm6345_l1_cpu {
struct bcm6345_l1_chip *intc;
void __iomem *map_base;
unsigned int parent_irq;
u32 enable_cache[];
@@ -115,17 +116,11 @@ static inline unsigned int cpu_for_irq(struct bcm6345_l1_chip *intc,
static void bcm6345_l1_irq_handle(struct irq_desc *desc)
{
struct bcm6345_l1_chip *intc = irq_desc_get_handler_data(desc);
struct bcm6345_l1_cpu *cpu;
struct bcm6345_l1_cpu *cpu = irq_desc_get_handler_data(desc);
struct bcm6345_l1_chip *intc = cpu->intc;
struct irq_chip *chip = irq_desc_get_chip(desc);
unsigned int idx;
#ifdef CONFIG_SMP
cpu = intc->cpus[cpu_logical_map(smp_processor_id())];
#else
cpu = intc->cpus[0];
#endif
chained_irq_enter(chip, desc);
for (idx = 0; idx < intc->n_words; idx++) {
@@ -257,6 +252,7 @@ static int __init bcm6345_l1_init_one(struct device_node *dn,
if (!cpu)
return -ENOMEM;
cpu->intc = intc;
cpu->map_base = ioremap(res.start, sz);
if (!cpu->map_base)
return -ENOMEM;
@@ -272,7 +268,7 @@ static int __init bcm6345_l1_init_one(struct device_node *dn,
return -EINVAL;
}
irq_set_chained_handler_and_data(cpu->parent_irq,
bcm6345_l1_irq_handle, intc);
bcm6345_l1_irq_handle, cpu);
return 0;
}


@@ -839,7 +839,7 @@ hfcpci_fill_fifo(struct bchannel *bch)
*z1t = cpu_to_le16(new_z1); /* now send data */
if (bch->tx_idx < bch->tx_skb->len)
return;
dev_kfree_skb(bch->tx_skb);
dev_kfree_skb_any(bch->tx_skb);
if (get_next_bframe(bch))
goto next_t_frame;
return;
@@ -895,7 +895,7 @@ hfcpci_fill_fifo(struct bchannel *bch)
}
bz->za[new_f1].z1 = cpu_to_le16(new_z1); /* for next buffer */
bz->f1 = new_f1; /* next frame */
dev_kfree_skb(bch->tx_skb);
dev_kfree_skb_any(bch->tx_skb);
get_next_bframe(bch);
}
@@ -1119,7 +1119,7 @@ tx_birq(struct bchannel *bch)
if (bch->tx_skb && bch->tx_idx < bch->tx_skb->len)
hfcpci_fill_fifo(bch);
else {
dev_kfree_skb(bch->tx_skb);
dev_kfree_skb_any(bch->tx_skb);
if (get_next_bframe(bch))
hfcpci_fill_fifo(bch);
}
@@ -2272,7 +2272,7 @@ _hfcpci_softirq(struct device *dev, void *unused)
return 0;
if (hc->hw.int_m2 & HFCPCI_IRQ_ENABLE) {
spin_lock(&hc->lock);
spin_lock_irq(&hc->lock);
bch = Sel_BCS(hc, hc->hw.bswapped ? 2 : 1);
if (bch && bch->state == ISDN_P_B_RAW) { /* B1 rx&tx */
main_rec_hfcpci(bch);
@@ -2283,7 +2283,7 @@ _hfcpci_softirq(struct device *dev, void *unused)
main_rec_hfcpci(bch);
tx_birq(bch);
}
spin_unlock(&hc->lock);
spin_unlock_irq(&hc->lock);
}
return 0;
}


@@ -49,7 +49,7 @@
*
* bch_bucket_alloc() allocates a single bucket from a specific cache.
*
* bch_bucket_alloc_set() allocates one or more buckets from different caches
* bch_bucket_alloc_set() allocates one bucket from different caches
* out of a cache set.
*
* free_some_buckets() drives all the processes described above. It's called
@@ -488,34 +488,29 @@ void bch_bucket_free(struct cache_set *c, struct bkey *k)
}
int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
struct bkey *k, int n, bool wait)
struct bkey *k, bool wait)
{
int i;
struct cache *ca;
long b;
/* No allocation if CACHE_SET_IO_DISABLE bit is set */
if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags)))
return -1;
lockdep_assert_held(&c->bucket_lock);
BUG_ON(!n || n > c->caches_loaded || n > MAX_CACHES_PER_SET);
bkey_init(k);
/* sort by free space/prio of oldest data in caches */
ca = c->cache_by_alloc[0];
b = bch_bucket_alloc(ca, reserve, wait);
if (b == -1)
goto err;
for (i = 0; i < n; i++) {
struct cache *ca = c->cache_by_alloc[i];
long b = bch_bucket_alloc(ca, reserve, wait);
k->ptr[0] = MAKE_PTR(ca->buckets[b].gen,
bucket_to_sector(c, b),
ca->sb.nr_this_dev);
if (b == -1)
goto err;
k->ptr[i] = MAKE_PTR(ca->buckets[b].gen,
bucket_to_sector(c, b),
ca->sb.nr_this_dev);
SET_KEY_PTRS(k, i + 1);
}
SET_KEY_PTRS(k, 1);
return 0;
err:
@@ -525,12 +520,12 @@ int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
}
int bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
struct bkey *k, int n, bool wait)
struct bkey *k, bool wait)
{
int ret;
mutex_lock(&c->bucket_lock);
ret = __bch_bucket_alloc_set(c, reserve, k, n, wait);
ret = __bch_bucket_alloc_set(c, reserve, k, wait);
mutex_unlock(&c->bucket_lock);
return ret;
}
@@ -638,7 +633,7 @@ bool bch_alloc_sectors(struct cache_set *c,
spin_unlock(&c->data_bucket_lock);
if (bch_bucket_alloc_set(c, watermark, &alloc.key, 1, wait))
if (bch_bucket_alloc_set(c, watermark, &alloc.key, wait))
return false;
spin_lock(&c->data_bucket_lock);


@@ -970,9 +970,9 @@ void bch_bucket_free(struct cache_set *c, struct bkey *k);
long bch_bucket_alloc(struct cache *ca, unsigned int reserve, bool wait);
int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
struct bkey *k, int n, bool wait);
struct bkey *k, bool wait);
int bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
struct bkey *k, int n, bool wait);
struct bkey *k, bool wait);
bool bch_alloc_sectors(struct cache_set *c, struct bkey *k,
unsigned int sectors, unsigned int write_point,
unsigned int write_prio, bool wait);


@@ -1137,11 +1137,13 @@ struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
struct btree *parent)
{
BKEY_PADDED(key) k;
struct btree *b = ERR_PTR(-EAGAIN);
struct btree *b;
mutex_lock(&c->bucket_lock);
retry:
if (__bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, 1, wait))
/* return ERR_PTR(-EAGAIN) when it fails */
b = ERR_PTR(-EAGAIN);
if (__bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, wait))
goto err;
bkey_put(c, &k.key);


@@ -428,7 +428,7 @@ static int __uuid_write(struct cache_set *c)
closure_init_stack(&cl);
lockdep_assert_held(&bch_register_lock);
if (bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, 1, true))
if (bch_bucket_alloc_set(c, RESERVE_BTREE, &k.key, true))
return 1;
SET_KEY_SIZE(&k.key, c->sb.bucket_size);


@@ -854,7 +854,13 @@ struct smq_policy {
struct background_tracker *bg_work;
bool migrations_allowed;
bool migrations_allowed:1;
/*
* If this is set the policy will try and clean the whole cache
* even if the device is not idle.
*/
bool cleaner:1;
};
/*----------------------------------------------------------------*/
@@ -1133,7 +1139,7 @@ static bool clean_target_met(struct smq_policy *mq, bool idle)
* Cache entries may not be populated. So we cannot rely on the
* size of the clean queue.
*/
if (idle) {
if (idle || mq->cleaner) {
/*
* We'd like to clean everything.
*/
@@ -1716,11 +1722,9 @@ static void calc_hotspot_params(sector_t origin_size,
*hotspot_block_size /= 2u;
}
static struct dm_cache_policy *__smq_create(dm_cblock_t cache_size,
sector_t origin_size,
sector_t cache_block_size,
bool mimic_mq,
bool migrations_allowed)
static struct dm_cache_policy *
__smq_create(dm_cblock_t cache_size, sector_t origin_size, sector_t cache_block_size,
bool mimic_mq, bool migrations_allowed, bool cleaner)
{
unsigned i;
unsigned nr_sentinels_per_queue = 2u * NR_CACHE_LEVELS;
@@ -1807,6 +1811,7 @@ static struct dm_cache_policy *__smq_create(dm_cblock_t cache_size,
goto bad_btracker;
mq->migrations_allowed = migrations_allowed;
mq->cleaner = cleaner;
return &mq->policy;
@@ -1830,21 +1835,24 @@ static struct dm_cache_policy *smq_create(dm_cblock_t cache_size,
sector_t origin_size,
sector_t cache_block_size)
{
return __smq_create(cache_size, origin_size, cache_block_size, false, true);
return __smq_create(cache_size, origin_size, cache_block_size,
false, true, false);
}
static struct dm_cache_policy *mq_create(dm_cblock_t cache_size,
sector_t origin_size,
sector_t cache_block_size)
{
return __smq_create(cache_size, origin_size, cache_block_size, true, true);
return __smq_create(cache_size, origin_size, cache_block_size,
true, true, false);
}
static struct dm_cache_policy *cleaner_create(dm_cblock_t cache_size,
sector_t origin_size,
sector_t cache_block_size)
{
return __smq_create(cache_size, origin_size, cache_block_size, false, false);
return __smq_create(cache_size, origin_size, cache_block_size,
false, false, true);
}
/*----------------------------------------------------------------*/


@@ -3284,15 +3284,19 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
/* Try to adjust the raid4/5/6 stripe cache size to the stripe size */
if (rs_is_raid456(rs)) {
r = rs_set_raid456_stripe_cache(rs);
if (r)
if (r) {
mddev_unlock(&rs->md);
goto bad_stripe_cache;
}
}
/* Now do an early reshape check */
if (test_bit(RT_FLAG_RESHAPE_RS, &rs->runtime_flags)) {
r = rs_check_reshape(rs);
if (r)
if (r) {
mddev_unlock(&rs->md);
goto bad_check_reshape;
}
/* Restore new, ctr requested layout to perform check */
rs_config_restore(rs, &rs_layout);
@@ -3301,6 +3305,7 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
r = rs->md.pers->check_reshape(&rs->md);
if (r) {
ti->error = "Reshape check failed";
mddev_unlock(&rs->md);
goto bad_check_reshape;
}
}


@@ -1177,7 +1177,6 @@ static int meson_nand_attach_chip(struct nand_chip *nand)
struct meson_nfc *nfc = nand_get_controller_data(nand);
struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand);
struct mtd_info *mtd = nand_to_mtd(nand);
int nsectors = mtd->writesize / 1024;
int ret;
if (!mtd->name) {
@@ -1195,7 +1194,7 @@ static int meson_nand_attach_chip(struct nand_chip *nand)
nand->options |= NAND_NO_SUBPAGE_WRITE;
ret = nand_ecc_choose_conf(nand, nfc->data->ecc_caps,
mtd->oobsize - 2 * nsectors);
mtd->oobsize - 2);
if (ret) {
dev_err(nfc->dev, "failed to ECC init\n");
return -EINVAL;


@@ -174,17 +174,17 @@ static void elm_load_syndrome(struct elm_info *info,
switch (info->bch_type) {
case BCH8_ECC:
/* syndrome fragment 0 = ecc[9-12B] */
val = cpu_to_be32(*(u32 *) &ecc[9]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[9]);
elm_write_reg(info, offset, val);
/* syndrome fragment 1 = ecc[5-8B] */
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[5]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[5]);
elm_write_reg(info, offset, val);
/* syndrome fragment 2 = ecc[1-4B] */
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[1]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[1]);
elm_write_reg(info, offset, val);
/* syndrome fragment 3 = ecc[0B] */
@@ -194,35 +194,35 @@ static void elm_load_syndrome(struct elm_info *info,
break;
case BCH4_ECC:
/* syndrome fragment 0 = ecc[20-52b] bits */
val = (cpu_to_be32(*(u32 *) &ecc[3]) >> 4) |
val = ((__force u32)cpu_to_be32(*(u32 *)&ecc[3]) >> 4) |
((ecc[2] & 0xf) << 28);
elm_write_reg(info, offset, val);
/* syndrome fragment 1 = ecc[0-20b] bits */
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[0]) >> 12;
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[0]) >> 12;
elm_write_reg(info, offset, val);
break;
case BCH16_ECC:
val = cpu_to_be32(*(u32 *) &ecc[22]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[22]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[18]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[18]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[14]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[14]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[10]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[10]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[6]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[6]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[2]);
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[2]);
elm_write_reg(info, offset, val);
offset += 4;
val = cpu_to_be32(*(u32 *) &ecc[0]) >> 16;
val = (__force u32)cpu_to_be32(*(u32 *)&ecc[0]) >> 16;
elm_write_reg(info, offset, val);
break;
default:
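
The added (__force u32) casts are sparse annotations, not behaviour changes: cpu_to_be32() returns the restricted __be32 type, and assigning that to a plain u32 normally draws a sparse warning. Here the raw big-endian bit pattern really is what the ELM syndrome registers expect, so the cast documents that the type mixing is intentional. A small illustration of the rule (not omap_elm code):

#include <linux/types.h>
#include <asm/byteorder.h>

static u32 be32_raw_bits(u32 host)
{
	__be32 wire = cpu_to_be32(host);	/* sparse type: __be32 */

	/* Without __force, sparse warns about casting __be32 to u32;
	 * with it, we assert the byte-swapped pattern is wanted as-is.
	 */
	return (__force u32)wire;
}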


@@ -60,7 +60,7 @@ static int tc58cxgxsx_ecc_get_status(struct spinand_device *spinand,
{
struct nand_device *nand = spinand_to_nand(spinand);
u8 mbf = 0;
struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, &mbf);
struct spi_mem_op op = SPINAND_GET_FEATURE_OP(0x30, spinand->scratchbuf);
switch (status & STATUS_ECC_MASK) {
case STATUS_ECC_NO_BITFLIPS:
@@ -79,7 +79,7 @@ static int tc58cxgxsx_ecc_get_status(struct spinand_device *spinand,
if (spi_mem_exec_op(spinand->spimem, &op))
return nand->eccreq.strength;
mbf >>= 4;
mbf = *(spinand->scratchbuf) >> 4;
if (WARN_ON(mbf > nand->eccreq.strength || !mbf))
return nand->eccreq.strength;


@@ -1153,6 +1153,11 @@ static void bond_setup_by_slave(struct net_device *bond_dev,
memcpy(bond_dev->broadcast, slave_dev->broadcast,
slave_dev->addr_len);
if (slave_dev->flags & IFF_POINTOPOINT) {
bond_dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
bond_dev->flags |= (IFF_POINTOPOINT | IFF_NOARP);
}
}
/* On bonding slaves other than the currently active slave, suppress


@@ -732,6 +732,8 @@ static int gs_can_close(struct net_device *netdev)
usb_kill_anchored_urbs(&dev->tx_submitted);
atomic_set(&dev->active_tx_urbs, 0);
dev->can.state = CAN_STATE_STOPPED;
/* reset the device */
rc = gs_cmd_reset(dev);
if (rc < 0)


@@ -1638,8 +1638,11 @@ static int atl1e_tso_csum(struct atl1e_adapter *adapter,
real_len = (((unsigned char *)ip_hdr(skb) - skb->data)
+ ntohs(ip_hdr(skb)->tot_len));
if (real_len < skb->len)
pskb_trim(skb, real_len);
if (real_len < skb->len) {
err = pskb_trim(skb, real_len);
if (err)
return err;
}
hdr_len = (skb_transport_offset(skb) + tcp_hdrlen(skb));
if (unlikely(skb->len == hdr_len)) {


@@ -1140,7 +1140,8 @@ static struct sk_buff *be_lancer_xmit_workarounds(struct be_adapter *adapter,
(lancer_chip(adapter) || BE3_chip(adapter) ||
skb_vlan_tag_present(skb)) && is_ipv4_pkt(skb)) {
ip = (struct iphdr *)ip_hdr(skb);
pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len));
if (unlikely(pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len))))
goto tx_drop;
}
/* If vlan tag is already inlined in the packet, skip HW VLAN


@@ -1755,7 +1755,7 @@ void i40e_dbg_pf_exit(struct i40e_pf *pf)
void i40e_dbg_init(void)
{
i40e_dbg_root = debugfs_create_dir(i40e_driver_name, NULL);
if (!i40e_dbg_root)
if (IS_ERR(i40e_dbg_root))
pr_info("init of debugfs failed\n");
}


@@ -8423,7 +8423,7 @@ static void ixgbe_atr(struct ixgbe_ring *ring,
struct ixgbe_adapter *adapter = q_vector->adapter;
if (unlikely(skb_tail_pointer(skb) < hdr.network +
VXLAN_HEADROOM))
vxlan_headroom(0)))
return;
/* verify the port is recognized as VXLAN */


@@ -121,7 +121,9 @@ static int mlx5e_ipsec_remove_trailer(struct sk_buff *skb, struct xfrm_state *x)
trailer_len = alen + plen + 2;
pskb_trim(skb, skb->len - trailer_len);
ret = pskb_trim(skb, skb->len - trailer_len);
if (unlikely(ret))
return ret;
if (skb->protocol == htons(ETH_P_IP)) {
ipv4hdr->tot_len = htons(ntohs(ipv4hdr->tot_len) - trailer_len);
ip_send_check(ipv4hdr);
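
The atl1e, benet, and mlx5e fixes in this release all apply the same rule: pskb_trim() may need to unshare a cloned skb (via pskb_expand_head()), which can fail with -ENOMEM, so its return value must be checked before the packet is treated as trimmed. The common shape, with a made-up wrapper name:

#include <linux/skbuff.h>

static int trim_to_declared_len(struct sk_buff *skb, unsigned int real_len)
{
	int err;

	if (real_len < skb->len) {
		err = pskb_trim(skb, real_len);	/* may allocate, may fail */
		if (err)
			return err;		/* skb left untrimmed */
	}
	return 0;
}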


@@ -423,11 +423,12 @@ int mlx5dr_cmd_create_reformat_ctx(struct mlx5_core_dev *mdev,
err = mlx5_cmd_exec(mdev, in, inlen, out, sizeof(out));
if (err)
return err;
goto err_free_in;
*reformat_id = MLX5_GET(alloc_packet_reformat_context_out, out, packet_reformat_id);
kvfree(in);
err_free_in:
kvfree(in);
return err;
}


@@ -1481,15 +1481,15 @@ static int temac_probe(struct platform_device *pdev)
}
/* Error handle returned DMA RX and TX interrupts */
if (lp->rx_irq < 0) {
if (lp->rx_irq != -EPROBE_DEFER)
dev_err(&pdev->dev, "could not get DMA RX irq\n");
return lp->rx_irq;
if (lp->rx_irq <= 0) {
rc = lp->rx_irq ?: -EINVAL;
return dev_err_probe(&pdev->dev, rc,
"could not get DMA RX irq\n");
}
if (lp->tx_irq < 0) {
if (lp->tx_irq != -EPROBE_DEFER)
dev_err(&pdev->dev, "could not get DMA TX irq\n");
return lp->tx_irq;
if (lp->tx_irq <= 0) {
rc = lp->tx_irq ?: -EINVAL;
return dev_err_probe(&pdev->dev, rc,
"could not get DMA TX irq\n");
}
if (temac_np) {


@@ -525,7 +525,7 @@ static int tap_open(struct inode *inode, struct file *file)
q->sock.state = SS_CONNECTED;
q->sock.file = file;
q->sock.ops = &tap_socket_ops;
sock_init_data_uid(&q->sock, &q->sk, inode->i_uid);
sock_init_data_uid(&q->sock, &q->sk, current_fsuid());
q->sk.sk_write_space = tap_sock_write_space;
q->sk.sk_destruct = tap_sock_destruct;
q->flags = IFF_VNET_HDR | IFF_NO_PI | IFF_TAP;


@@ -2129,6 +2129,15 @@ static void team_setup_by_port(struct net_device *dev,
dev->mtu = port_dev->mtu;
memcpy(dev->broadcast, port_dev->broadcast, port_dev->addr_len);
eth_hw_addr_inherit(dev, port_dev);
if (port_dev->flags & IFF_POINTOPOINT) {
dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
dev->flags |= (IFF_POINTOPOINT | IFF_NOARP);
} else if ((port_dev->flags & (IFF_BROADCAST | IFF_MULTICAST)) ==
(IFF_BROADCAST | IFF_MULTICAST)) {
dev->flags |= (IFF_BROADCAST | IFF_MULTICAST);
dev->flags &= ~(IFF_POINTOPOINT | IFF_NOARP);
}
}
static int team_dev_type_check_change(struct net_device *dev,


@@ -3534,7 +3534,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
tfile->socket.file = file;
tfile->socket.ops = &tun_socket_ops;
sock_init_data_uid(&tfile->socket, &tfile->sk, inode->i_uid);
sock_init_data_uid(&tfile->socket, &tfile->sk, current_fsuid());
tfile->sk.sk_write_space = tun_sock_write_space;
tfile->sk.sk_sndbuf = INT_MAX;


@@ -605,9 +605,23 @@ static const struct usb_device_id products[] = {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8005, /* A-300 */
ZAURUS_FAKE_INTERFACE,
.driver_info = 0,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8006, /* B-500/SL-5600 */
ZAURUS_MASTER_INTERFACE,
.driver_info = 0,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8006, /* B-500/SL-5600 */
ZAURUS_FAKE_INTERFACE,
.driver_info = 0,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
@@ -615,6 +629,13 @@ static const struct usb_device_id products[] = {
.idProduct = 0x8007, /* C-700 */
ZAURUS_MASTER_INTERFACE,
.driver_info = 0,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8007, /* C-700 */
ZAURUS_FAKE_INTERFACE,
.driver_info = 0,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,


@@ -1763,6 +1763,10 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
} else if (!info->in || !info->out)
status = usbnet_get_endpoints (dev, udev);
else {
u8 ep_addrs[3] = {
info->in + USB_DIR_IN, info->out + USB_DIR_OUT, 0
};
dev->in = usb_rcvbulkpipe (xdev, info->in);
dev->out = usb_sndbulkpipe (xdev, info->out);
if (!(info->flags & FLAG_NO_SETINT))
@@ -1772,6 +1776,8 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
else
status = 0;
if (status == 0 && !usb_check_bulk_endpoints(udev, ep_addrs))
status = -EINVAL;
}
if (status >= 0 && dev->status)
status = init_status (dev, udev);


@@ -289,9 +289,23 @@ static const struct usb_device_id products [] = {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8005, /* A-300 */
ZAURUS_FAKE_INTERFACE,
.driver_info = (unsigned long)&bogus_mdlm_info,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8006, /* B-500/SL-5600 */
ZAURUS_MASTER_INTERFACE,
.driver_info = ZAURUS_PXA_INFO,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8006, /* B-500/SL-5600 */
ZAURUS_FAKE_INTERFACE,
.driver_info = (unsigned long)&bogus_mdlm_info,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
@@ -299,6 +313,13 @@ static const struct usb_device_id products [] = {
.idProduct = 0x8007, /* C-700 */
ZAURUS_MASTER_INTERFACE,
.driver_info = ZAURUS_PXA_INFO,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8007, /* C-700 */
ZAURUS_FAKE_INTERFACE,
.driver_info = (unsigned long)&bogus_mdlm_info,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
| USB_DEVICE_ID_MATCH_DEVICE,


@@ -3268,6 +3268,8 @@ static int virtnet_probe(struct virtio_device *vdev)
}
}
_virtnet_set_queues(vi, vi->curr_queue_pairs);
/* serialize netdev register + virtio_device_ready() with ndo_open() */
rtnl_lock();
@@ -3288,8 +3290,6 @@ static int virtnet_probe(struct virtio_device *vdev)
goto free_unregister_netdev;
}
virtnet_set_queues(vi, vi->curr_queue_pairs);
/* Assume link up if device can't report link status,
otherwise get link status from config. */
netif_carrier_off(dev);


@@ -2550,7 +2550,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
}
ndst = &rt->dst;
skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM);
skb_tunnel_check_pmtu(skb, ndst, vxlan_headroom(flags & VXLAN_F_GPE));
tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
@@ -2590,7 +2590,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
goto out_unlock;
}
skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM);
skb_tunnel_check_pmtu(skb, ndst, vxlan_headroom((flags & VXLAN_F_GPE) | VXLAN_F_IPV6));
tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
ttl = ttl ? : ip6_dst_hoplimit(ndst);
@@ -2908,14 +2908,12 @@ static int vxlan_change_mtu(struct net_device *dev, int new_mtu)
struct vxlan_rdst *dst = &vxlan->default_dst;
struct net_device *lowerdev = __dev_get_by_index(vxlan->net,
dst->remote_ifindex);
bool use_ipv6 = !!(vxlan->cfg.flags & VXLAN_F_IPV6);
/* This check is different than dev->max_mtu, because it looks at
* the lowerdev->mtu, rather than the static dev->max_mtu
*/
if (lowerdev) {
int max_mtu = lowerdev->mtu -
(use_ipv6 ? VXLAN6_HEADROOM : VXLAN_HEADROOM);
int max_mtu = lowerdev->mtu - vxlan_headroom(vxlan->cfg.flags);
if (new_mtu > max_mtu)
return -EINVAL;
}
@ -3514,11 +3512,11 @@ static void vxlan_config_apply(struct net_device *dev,
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_rdst *dst = &vxlan->default_dst;
unsigned short needed_headroom = ETH_HLEN;
bool use_ipv6 = !!(conf->flags & VXLAN_F_IPV6);
int max_mtu = ETH_MAX_MTU;
u32 flags = conf->flags;
if (!changelink) {
if (conf->flags & VXLAN_F_GPE)
if (flags & VXLAN_F_GPE)
vxlan_raw_setup(dev);
else
vxlan_ether_setup(dev);
@ -3544,8 +3542,7 @@ static void vxlan_config_apply(struct net_device *dev,
dev->needed_tailroom = lowerdev->needed_tailroom;
max_mtu = lowerdev->mtu - (use_ipv6 ? VXLAN6_HEADROOM :
VXLAN_HEADROOM);
max_mtu = lowerdev->mtu - vxlan_headroom(flags);
if (max_mtu < ETH_MIN_MTU)
max_mtu = ETH_MIN_MTU;
@ -3556,10 +3553,9 @@ static void vxlan_config_apply(struct net_device *dev,
if (dev->mtu > max_mtu)
dev->mtu = max_mtu;
if (use_ipv6 || conf->flags & VXLAN_F_COLLECT_METADATA)
needed_headroom += VXLAN6_HEADROOM;
else
needed_headroom += VXLAN_HEADROOM;
if (flags & VXLAN_F_COLLECT_METADATA)
flags |= VXLAN_F_IPV6;
needed_headroom += vxlan_headroom(flags);
dev->needed_headroom = needed_headroom;
memcpy(&vxlan->cfg, conf, sizeof(*conf));
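
The hunks above replace the fixed VXLAN_HEADROOM/VXLAN6_HEADROOM constants with vxlan_headroom(), a helper added in include/net/vxlan.h and not shown here. A minimal sketch, assuming the standard header structs and VXLAN_F_* flags:

static inline unsigned short vxlan_headroom(u32 flags)
{
	/* VXLAN:     IP4/6 header + UDP + VXLAN + Ethernet header */
	/* VXLAN-GPE: IP4/6 header + UDP + VXLAN */
	return (flags & VXLAN_F_IPV6 ? sizeof(struct ipv6hdr) :
				       sizeof(struct iphdr)) +
	       sizeof(struct udphdr) + sizeof(struct vxlanhdr) +
	       (flags & VXLAN_F_GPE ? 0 : ETH_HLEN);
}

GPE frames carry no inner Ethernet header, which the old constants always counted; that is also why the IPv6 transmit path above ORs VXLAN_F_IPV6 into the flags before calling it.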


@ -200,12 +200,39 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
link->clkpm_disable = blacklist ? 1 : 0;
}
static bool pcie_retrain_link(struct pcie_link_state *link)
static int pcie_wait_for_retrain(struct pci_dev *pdev)
{
struct pci_dev *parent = link->pdev;
unsigned long end_jiffies;
u16 reg16;
/* Wait for Link Training to be cleared by hardware */
end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
do {
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &reg16);
if (!(reg16 & PCI_EXP_LNKSTA_LT))
return 0;
msleep(1);
} while (time_before(jiffies, end_jiffies));
return -ETIMEDOUT;
}
static int pcie_retrain_link(struct pcie_link_state *link)
{
struct pci_dev *parent = link->pdev;
int rc;
u16 reg16;
/*
* Ensure the updated LNKCTL parameters are used during link
* training by checking that there is no ongoing link training to
* avoid LTSSM race as recommended in Implementation Note at the
* end of PCIe r6.0.1 sec 7.5.3.7.
*/
rc = pcie_wait_for_retrain(parent);
if (rc)
return rc;
pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
reg16 |= PCI_EXP_LNKCTL_RL;
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
@ -219,15 +246,7 @@ static bool pcie_retrain_link(struct pcie_link_state *link)
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
}
/* Wait for link training end. Break out after waiting for timeout */
end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
do {
pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
if (!(reg16 & PCI_EXP_LNKSTA_LT))
break;
msleep(1);
} while (time_before(jiffies, end_jiffies));
return !(reg16 & PCI_EXP_LNKSTA_LT);
return pcie_wait_for_retrain(parent);
}
/*
@ -296,15 +315,15 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
reg16 &= ~PCI_EXP_LNKCTL_CCC;
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
if (pcie_retrain_link(link))
return;
if (pcie_retrain_link(link)) {
/* Training failed. Restore common clock configurations */
pci_err(parent, "ASPM: Could not configure common clock\n");
list_for_each_entry(child, &linkbus->devices, bus_list)
pcie_capability_write_word(child, PCI_EXP_LNKCTL,
/* Training failed. Restore common clock configurations */
pci_err(parent, "ASPM: Could not configure common clock\n");
list_for_each_entry(child, &linkbus->devices, bus_list)
pcie_capability_write_word(child, PCI_EXP_LNKCTL,
child_reg[PCI_FUNC(child->devfn)]);
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg);
}
}
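
LINK_RETRAIN_TIMEOUT is defined near the top of aspm.c and is not visible in these hunks; upstream gives the poll loop a one-second budget:

#define LINK_RETRAIN_TIMEOUT HZ

so pcie_wait_for_retrain() samples PCI_EXP_LNKSTA roughly once per millisecond (msleep(1)) for up to a second before reporting -ETIMEDOUT.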
/* Convert L0s latency encoding to ns */


@ -155,7 +155,7 @@ static int hisi_inno_phy_probe(struct platform_device *pdev)
phy_set_drvdata(phy, &priv->ports[i]);
i++;
if (i > INNO_PHY_PORT_NUM) {
if (i >= INNO_PHY_PORT_NUM) {
dev_warn(dev, "Support %d ports in maximum\n", i);
break;
}
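
The fixed comparison is a classic off-by-one: the ports array holds INNO_PHY_PORT_NUM entries and i is incremented before the check, so i already equals the number of slots consumed. A standalone illustration (plain C with hypothetical stand-ins, not the driver code):

#include <assert.h>

#define INNO_PHY_PORT_NUM 2

int main(void)
{
	int used[INNO_PHY_PORT_NUM] = { 0 };	/* stand-in for priv->ports[] */
	int i = 0;

	for (;;) {
		used[i] = 1;	/* fill one slot, like phy_set_drvdata() */
		i++;
		if (i >= INNO_PHY_PORT_NUM)	/* old '>' would let used[2] happen */
			break;
	}
	assert(i == INNO_PHY_PORT_NUM);
	return 0;
}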


@ -210,7 +210,7 @@ static ssize_t set_device_state(const char *buf, size_t count, u8 mask)
return -EINVAL;
if (quirks->ec_read_only)
return -EOPNOTSUPP;
return 0;
/* read current device state */
result = ec_read(MSI_STANDARD_EC_COMMAND_ADDRESS, &rdata);
@ -841,15 +841,15 @@ static bool msi_laptop_i8042_filter(unsigned char data, unsigned char str,
static void msi_init_rfkill(struct work_struct *ignored)
{
if (rfk_wlan) {
rfkill_set_sw_state(rfk_wlan, !wlan_s);
msi_rfkill_set_state(rfk_wlan, !wlan_s);
rfkill_wlan_set(NULL, !wlan_s);
}
if (rfk_bluetooth) {
rfkill_set_sw_state(rfk_bluetooth, !bluetooth_s);
msi_rfkill_set_state(rfk_bluetooth, !bluetooth_s);
rfkill_bluetooth_set(NULL, !bluetooth_s);
}
if (rfk_threeg) {
rfkill_set_sw_state(rfk_threeg, !threeg_s);
msi_rfkill_set_state(rfk_threeg, !threeg_s);
rfkill_threeg_set(NULL, !threeg_s);
}
}
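
msi_rfkill_set_state() is added earlier in this patch, outside these hunks; presumably it mirrors the ec_read_only quirk handling seen above, reporting a hardware-controlled state on EC-read-only models instead of a software one:

static void msi_rfkill_set_state(struct rfkill *rfkill, bool blocked)
{
	if (quirks->ec_read_only)
		rfkill_set_hw_state(rfkill, blocked);
	else
		rfkill_set_sw_state(rfkill, blocked);
}

On models where the EC is read-only the firmware owns the switch, so the state must surface as a hardware block rather than a software block that rfkill would consider user-changeable.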


@ -147,12 +147,13 @@ static int meson_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm)
return err;
}
return pwm_set_chip_data(pwm, channel);
return 0;
}
static void meson_pwm_free(struct pwm_chip *chip, struct pwm_device *pwm)
{
struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
struct meson_pwm *meson = to_meson_pwm(chip);
struct meson_pwm_channel *channel = &meson->channels[pwm->hwpwm];
if (channel)
clk_disable_unprepare(channel->clk);
@ -161,9 +162,10 @@ static void meson_pwm_free(struct pwm_chip *chip, struct pwm_device *pwm)
static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
const struct pwm_state *state)
{
struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
unsigned int duty, period, pre_div, cnt, duty_cnt;
unsigned long fin_freq = -1;
struct meson_pwm_channel *channel = &meson->channels[pwm->hwpwm];
unsigned int pre_div, cnt, duty_cnt;
unsigned long fin_freq;
u64 duty, period;
duty = state->duty_cycle;
period = state->period;
@ -185,19 +187,19 @@ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
dev_dbg(meson->chip.dev, "fin_freq: %lu Hz\n", fin_freq);
pre_div = div64_u64(fin_freq * (u64)period, NSEC_PER_SEC * 0xffffLL);
pre_div = div64_u64(fin_freq * period, NSEC_PER_SEC * 0xffffLL);
if (pre_div > MISC_CLK_DIV_MASK) {
dev_err(meson->chip.dev, "unable to get period pre_div\n");
return -EINVAL;
}
cnt = div64_u64(fin_freq * (u64)period, NSEC_PER_SEC * (pre_div + 1));
cnt = div64_u64(fin_freq * period, NSEC_PER_SEC * (pre_div + 1));
if (cnt > 0xffff) {
dev_err(meson->chip.dev, "unable to get period cnt\n");
return -EINVAL;
}
dev_dbg(meson->chip.dev, "period=%u pre_div=%u cnt=%u\n", period,
dev_dbg(meson->chip.dev, "period=%llu pre_div=%u cnt=%u\n", period,
pre_div, cnt);
if (duty == period) {
@ -210,14 +212,13 @@ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
channel->lo = cnt;
} else {
/* Then check if we can have the duty with the same pre_div */
duty_cnt = div64_u64(fin_freq * (u64)duty,
NSEC_PER_SEC * (pre_div + 1));
duty_cnt = div64_u64(fin_freq * duty, NSEC_PER_SEC * (pre_div + 1));
if (duty_cnt > 0xffff) {
dev_err(meson->chip.dev, "unable to get duty cycle\n");
return -EINVAL;
}
dev_dbg(meson->chip.dev, "duty=%u pre_div=%u duty_cnt=%u\n",
dev_dbg(meson->chip.dev, "duty=%llu pre_div=%u duty_cnt=%u\n",
duty, pre_div, duty_cnt);
channel->pre_div = pre_div;
@ -230,7 +231,7 @@ static int meson_pwm_calc(struct meson_pwm *meson, struct pwm_device *pwm,
static void meson_pwm_enable(struct meson_pwm *meson, struct pwm_device *pwm)
{
struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
struct meson_pwm_channel *channel = &meson->channels[pwm->hwpwm];
struct meson_pwm_channel_data *channel_data;
unsigned long flags;
u32 value;
@ -273,8 +274,8 @@ static void meson_pwm_disable(struct meson_pwm *meson, struct pwm_device *pwm)
static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
const struct pwm_state *state)
{
struct meson_pwm_channel *channel = pwm_get_chip_data(pwm);
struct meson_pwm *meson = to_meson_pwm(chip);
struct meson_pwm_channel *channel = &meson->channels[pwm->hwpwm];
int err = 0;
if (!state)
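
A quick check of why duty and period move to u64: periods above UINT_MAX nanoseconds (about 4.29 s) silently truncated in the old unsigned int variables before ever reaching the dividers. A standalone demonstration (plain userspace C, not driver code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A 0.2 Hz PWM period is 5,000,000,000 ns - more than UINT_MAX. */
	unsigned int old_period = (unsigned int)5000000000ULL; /* 705032704 */
	uint64_t new_period = 5000000000ULL;                   /* kept intact */

	printf("32-bit: %u ns, 64-bit: %llu ns\n",
	       old_period, (unsigned long long)new_period);
	return 0;
}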


@ -137,6 +137,7 @@ static int dasd_ioctl_resume(struct dasd_block *block)
spin_unlock_irqrestore(get_ccwdev_lock(base->cdev), flags);
dasd_schedule_block_bh(block);
dasd_schedule_device_bh(base);
return 0;
}


@ -534,8 +534,7 @@ static void zfcp_fc_adisc_handler(void *data)
/* re-init to undo drop from zfcp_fc_adisc() */
port->d_id = ntoh24(adisc_resp->adisc_port_id);
/* port is good, unblock rport without going through erp */
zfcp_scsi_schedule_rport_register(port);
/* port is still good, nothing to do */
out:
atomic_andnot(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
put_device(&port->dev);
@ -595,9 +594,6 @@ void zfcp_fc_link_test_work(struct work_struct *work)
int retval;
set_worker_desc("zadisc%16llx", port->wwpn); /* < WORKER_DESC_LEN=24 */
get_device(&port->dev);
port->rport_task = RPORT_DEL;
zfcp_scsi_rport_work(&port->rport_work);
/* only issue one test command at one time per port */
if (atomic_read(&port->status) & ZFCP_STATUS_PORT_LINK_TEST)


@ -4831,7 +4831,8 @@ struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
}
INIT_DELAYED_WORK(&vha->scan.scan_work, qla_scan_work_fn);
sprintf(vha->host_str, "%s_%ld", QLA2XXX_DRIVER_NAME, vha->host_no);
snprintf(vha->host_str, sizeof(vha->host_str), "%s_%lu",
QLA2XXX_DRIVER_NAME, vha->host_no);
ql_dbg(ql_dbg_init, vha, 0x0041,
"Allocated the host=%p hw=%p vha=%p dev_name=%s",
vha->host, vha->hw, vha,
@ -4961,7 +4962,7 @@ qla2x00_uevent_emit(struct scsi_qla_host *vha, u32 code)
switch (code) {
case QLA_UEVENT_CODE_FW_DUMP:
snprintf(event_string, sizeof(event_string), "FW_DUMP=%ld",
snprintf(event_string, sizeof(event_string), "FW_DUMP=%lu",
vha->host_no);
break;
default:


@ -1584,8 +1584,10 @@ static int ks_wlan_set_encode_ext(struct net_device *dev,
commit |= SME_WEP_FLAG;
}
if (enc->key_len) {
memcpy(&key->key_val[0], &enc->key[0], enc->key_len);
key->key_len = enc->key_len;
int key_len = clamp_val(enc->key_len, 0, IW_ENCODING_TOKEN_MAX);
memcpy(&key->key_val[0], &enc->key[0], key_len);
key->key_len = key_len;
commit |= (SME_WEP_VAL1 << index);
}
break;


@ -80,7 +80,7 @@ static void dw8250_set_divisor(struct uart_port *p, unsigned int baud,
void dw8250_setup_port(struct uart_port *p)
{
struct uart_8250_port *up = up_to_u8250p(p);
u32 reg;
u32 reg, old_dlf;
/*
* If the Component Version Register returns zero, we know that
@ -93,9 +93,11 @@ void dw8250_setup_port(struct uart_port *p)
dev_dbg(p->dev, "Designware UART version %c.%c%c\n",
(reg >> 24) & 0xff, (reg >> 16) & 0xff, (reg >> 8) & 0xff);
/* Preserve value written by firmware or bootloader */
old_dlf = dw8250_readl_ext(p, DW_UART_DLF);
dw8250_writel_ext(p, DW_UART_DLF, ~0U);
reg = dw8250_readl_ext(p, DW_UART_DLF);
dw8250_writel_ext(p, DW_UART_DLF, 0);
dw8250_writel_ext(p, DW_UART_DLF, old_dlf);
if (reg) {
struct dw8250_port_data *d = p->private_data;


@ -821,7 +821,7 @@ static void sifive_serial_console_write(struct console *co, const char *s,
local_irq_restore(flags);
}
static int __init sifive_serial_console_setup(struct console *co, char *options)
static int sifive_serial_console_setup(struct console *co, char *options)
{
struct sifive_serial_port *ssp;
int baud = SIFIVE_DEFAULT_BAUD_RATE;


@ -437,6 +437,10 @@ static const struct usb_device_id usb_quirk_list[] = {
/* novation SoundControl XL */
{ USB_DEVICE(0x1235, 0x0061), .driver_info = USB_QUIRK_RESET_RESUME },
/* Focusrite Scarlett Solo USB */
{ USB_DEVICE(0x1235, 0x8211), .driver_info =
USB_QUIRK_DISCONNECT_SUSPEND },
/* Huawei 4G LTE module */
{ USB_DEVICE(0x12d1, 0x15bb), .driver_info =
USB_QUIRK_DISCONNECT_SUSPEND },


@ -248,9 +248,9 @@ int dwc3_core_soft_reset(struct dwc3 *dwc)
/*
* We're resetting only the device side because, if we're in host mode,
* XHCI driver will reset the host block. If dwc3 was configured for
* host-only mode, then we can return early.
* host-only mode or current role is host, then we can return early.
*/
if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
if (dwc->dr_mode == USB_DR_MODE_HOST || dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
return 0;
reg = dwc3_readl(dwc->regs, DWC3_DCTL);
@ -1004,22 +1004,6 @@ static int dwc3_core_init(struct dwc3 *dwc)
dwc3_writel(dwc->regs, DWC3_GUCTL1, reg);
}
if (dwc->dr_mode == USB_DR_MODE_HOST ||
dwc->dr_mode == USB_DR_MODE_OTG) {
reg = dwc3_readl(dwc->regs, DWC3_GUCTL);
/*
* Enable Auto retry Feature to make the controller operate in
* Host mode on seeing transaction errors (CRC errors or internal
* overrun scenarios) on IN transfers to reply to the device
* with a non-terminating retry ACK (i.e., an ACK transaction
* packet with Retry=1 & Nump != 0)
*/
reg |= DWC3_GUCTL_HSTINAUTORETRY;
dwc3_writel(dwc->regs, DWC3_GUCTL, reg);
}
/*
* Must config both number of packets and max burst settings to enable
* RX and/or TX threshold.


@ -247,9 +247,6 @@
#define DWC3_GCTL_GBLHIBERNATIONEN BIT(1)
#define DWC3_GCTL_DSBLCLKGTNG BIT(0)
/* Global User Control Register */
#define DWC3_GUCTL_HSTINAUTORETRY BIT(14)
/* Global User Control 1 Register */
#define DWC3_GUCTL1_PARKMODE_DISABLE_SS BIT(17)
#define DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS BIT(28)


@ -171,10 +171,12 @@ static int dwc3_pci_quirks(struct dwc3_pci *dwc)
/*
* A lot of BYT devices lack ACPI resource entries for
* the GPIOs, add a fallback mapping to the reference
* the GPIOs. If the ACPI entry for the GPIO controller
* is present add a fallback mapping to the reference
* design GPIOs which all boards seem to use.
*/
gpiod_add_lookup_table(&platform_bytcr_gpios);
if (acpi_dev_present("INT33FC", NULL, -1))
gpiod_add_lookup_table(&platform_bytcr_gpios);
/*
* These GPIOs will turn on the USB2 PHY. Note that we have to


@ -645,7 +645,13 @@ ohci_hcd_at91_drv_resume(struct device *dev)
at91_start_clock(ohci_at91);
ohci_resume(hcd, false);
/*
* According to the comment in ohci_hcd_at91_drv_suspend()
* we need to do a reset if the 48 MHz clock was stopped,
* that is, if ohci_at91->wakeup is clear. Tell ohci_resume()
* to reset in this case by setting its "hibernated" flag.
*/
ohci_resume(hcd, !ohci_at91->wakeup);
ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0);


@ -540,6 +540,7 @@ static int xhci_mtk_probe(struct platform_device *pdev)
}
device_init_wakeup(dev, true);
dma_set_max_seg_size(dev, UINT_MAX);
xhci = hcd_to_xhci(hcd);
xhci->main_hcd = hcd;


@ -933,15 +933,15 @@ static int tegra_xusb_powerdomain_init(struct device *dev,
int err;
tegra->genpd_dev_host = dev_pm_domain_attach_by_name(dev, "xusb_host");
if (IS_ERR_OR_NULL(tegra->genpd_dev_host)) {
err = PTR_ERR(tegra->genpd_dev_host) ? : -ENODATA;
if (IS_ERR(tegra->genpd_dev_host)) {
err = PTR_ERR(tegra->genpd_dev_host);
dev_err(dev, "failed to get host pm-domain: %d\n", err);
return err;
}
tegra->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "xusb_ss");
if (IS_ERR_OR_NULL(tegra->genpd_dev_ss)) {
err = PTR_ERR(tegra->genpd_dev_ss) ? : -ENODATA;
if (IS_ERR(tegra->genpd_dev_ss)) {
err = PTR_ERR(tegra->genpd_dev_ss);
dev_err(dev, "failed to get superspeed pm-domain: %d\n", err);
return err;
}


@ -251,6 +251,7 @@ static void option_instat_callback(struct urb *urb);
#define QUECTEL_PRODUCT_EM061K_LTA 0x0123
#define QUECTEL_PRODUCT_EM061K_LMS 0x0124
#define QUECTEL_PRODUCT_EC25 0x0125
#define QUECTEL_PRODUCT_EM060K_128 0x0128
#define QUECTEL_PRODUCT_EG91 0x0191
#define QUECTEL_PRODUCT_EG95 0x0195
#define QUECTEL_PRODUCT_BG96 0x0296
@ -268,6 +269,7 @@ static void option_instat_callback(struct urb *urb);
#define QUECTEL_PRODUCT_RM520N 0x0801
#define QUECTEL_PRODUCT_EC200U 0x0901
#define QUECTEL_PRODUCT_EC200S_CN 0x6002
#define QUECTEL_PRODUCT_EC200A 0x6005
#define QUECTEL_PRODUCT_EM061K_LWW 0x6008
#define QUECTEL_PRODUCT_EM061K_LCN 0x6009
#define QUECTEL_PRODUCT_EC200T 0x6026
@ -1197,6 +1199,9 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x30) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0xff, 0x40) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x30) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0x00, 0x40) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x40) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) },
@ -1225,6 +1230,7 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0900, 0xff, 0, 0), /* RM500U-CN */
.driver_info = ZLP },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200A, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },


@ -38,16 +38,6 @@ static struct usb_serial_driver vendor##_device = { \
{ USB_DEVICE(0x0a21, 0x8001) } /* MMT-7305WW */
DEVICE(carelink, CARELINK_IDS);
/* ZIO Motherboard USB driver */
#define ZIO_IDS() \
{ USB_DEVICE(0x1CBE, 0x0103) }
DEVICE(zio, ZIO_IDS);
/* Funsoft Serial USB driver */
#define FUNSOFT_IDS() \
{ USB_DEVICE(0x1404, 0xcddc) }
DEVICE(funsoft, FUNSOFT_IDS);
/* Infineon Flashloader driver */
#define FLASHLOADER_IDS() \
{ USB_DEVICE_INTERFACE_CLASS(0x058b, 0x0041, USB_CLASS_CDC_DATA) }, \
@ -55,6 +45,11 @@ DEVICE(funsoft, FUNSOFT_IDS);
{ USB_DEVICE(0x8087, 0x0801) }
DEVICE(flashloader, FLASHLOADER_IDS);
/* Funsoft Serial USB driver */
#define FUNSOFT_IDS() \
{ USB_DEVICE(0x1404, 0xcddc) }
DEVICE(funsoft, FUNSOFT_IDS);
/* Google Serial USB SubClass */
#define GOOGLE_IDS() \
{ USB_VENDOR_AND_INTERFACE_INFO(0x18d1, \
@ -63,16 +58,21 @@ DEVICE(flashloader, FLASHLOADER_IDS);
0x01) }
DEVICE(google, GOOGLE_IDS);
/* HP4x (48/49) Generic Serial driver */
#define HP4X_IDS() \
{ USB_DEVICE(0x03f0, 0x0121) }
DEVICE(hp4x, HP4X_IDS);
/* KAUFMANN RKS+CAN VCP */
#define KAUFMANN_IDS() \
{ USB_DEVICE(0x16d0, 0x0870) }
DEVICE(kaufmann, KAUFMANN_IDS);
/* Libtransistor USB console */
#define LIBTRANSISTOR_IDS() \
{ USB_DEVICE(0x1209, 0x8b00) }
DEVICE(libtransistor, LIBTRANSISTOR_IDS);
/* ViVOpay USB Serial Driver */
#define VIVOPAY_IDS() \
{ USB_DEVICE(0x1d5f, 0x1004) } /* ViVOpay 8800 */
DEVICE(vivopay, VIVOPAY_IDS);
/* Motorola USB Phone driver */
#define MOTO_IDS() \
{ USB_DEVICE(0x05c6, 0x3197) }, /* unknown Motorola phone */ \
@ -101,10 +101,10 @@ DEVICE(nokia, NOKIA_IDS);
{ USB_DEVICE(0x09d7, 0x0100) } /* NovAtel FlexPack GPS */
DEVICE_N(novatel_gps, NOVATEL_IDS, 3);
/* HP4x (48/49) Generic Serial driver */
#define HP4X_IDS() \
{ USB_DEVICE(0x03f0, 0x0121) }
DEVICE(hp4x, HP4X_IDS);
/* Siemens USB/MPI adapter */
#define SIEMENS_IDS() \
{ USB_DEVICE(0x908, 0x0004) }
DEVICE(siemens_mpi, SIEMENS_IDS);
/* Suunto ANT+ USB Driver */
#define SUUNTO_IDS() \
@ -112,45 +112,52 @@ DEVICE(hp4x, HP4X_IDS);
{ USB_DEVICE(0x0fcf, 0x1009) } /* Dynastream ANT USB-m Stick */
DEVICE(suunto, SUUNTO_IDS);
/* Siemens USB/MPI adapter */
#define SIEMENS_IDS() \
{ USB_DEVICE(0x908, 0x0004) }
DEVICE(siemens_mpi, SIEMENS_IDS);
/* ViVOpay USB Serial Driver */
#define VIVOPAY_IDS() \
{ USB_DEVICE(0x1d5f, 0x1004) } /* ViVOpay 8800 */
DEVICE(vivopay, VIVOPAY_IDS);
/* ZIO Motherboard USB driver */
#define ZIO_IDS() \
{ USB_DEVICE(0x1CBE, 0x0103) }
DEVICE(zio, ZIO_IDS);
/* All of the above structures mushed into two lists */
static struct usb_serial_driver * const serial_drivers[] = {
&carelink_device,
&zio_device,
&funsoft_device,
&flashloader_device,
&funsoft_device,
&google_device,
&hp4x_device,
&kaufmann_device,
&libtransistor_device,
&vivopay_device,
&moto_modem_device,
&motorola_tetra_device,
&nokia_device,
&novatel_gps_device,
&hp4x_device,
&suunto_device,
&siemens_mpi_device,
&suunto_device,
&vivopay_device,
&zio_device,
NULL
};
static const struct usb_device_id id_table[] = {
CARELINK_IDS(),
ZIO_IDS(),
FUNSOFT_IDS(),
FLASHLOADER_IDS(),
FUNSOFT_IDS(),
GOOGLE_IDS(),
HP4X_IDS(),
KAUFMANN_IDS(),
LIBTRANSISTOR_IDS(),
VIVOPAY_IDS(),
MOTO_IDS(),
MOTOROLA_TETRA_IDS(),
NOKIA_IDS(),
NOVATEL_IDS(),
HP4X_IDS(),
SUUNTO_IDS(),
SIEMENS_IDS(),
SUUNTO_IDS(),
VIVOPAY_IDS(),
ZIO_IDS(),
{ },
};
MODULE_DEVICE_TABLE(usb, id_table);


@ -3589,6 +3589,8 @@ static noinline int split_node(struct btrfs_trans_handle *trans,
ret = tree_mod_log_eb_copy(split, c, 0, mid, c_nritems - mid);
if (ret) {
btrfs_tree_unlock(split);
free_extent_buffer(split);
btrfs_abort_transaction(trans, ret);
return ret;
}


@ -4106,6 +4106,11 @@ void close_ctree(struct btrfs_fs_info *fs_info)
ASSERT(list_empty(&fs_info->delayed_iputs));
set_bit(BTRFS_FS_CLOSING_DONE, &fs_info->flags);
if (btrfs_check_quota_leak(fs_info)) {
WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
btrfs_err(fs_info, "qgroup reserved space leaked");
}
btrfs_free_qgroup_config(fs_info);
ASSERT(list_empty(&fs_info->delalloc_roots));


@ -4917,7 +4917,9 @@ static long btrfs_ioctl_qgroup_assign(struct file *file, void __user *arg)
}
/* update qgroup status and info */
mutex_lock(&fs_info->qgroup_ioctl_lock);
err = btrfs_run_qgroups(trans);
mutex_unlock(&fs_info->qgroup_ioctl_lock);
if (err < 0)
btrfs_handle_fs_error(fs_info, err,
"failed to update qgroup status and info");


@ -504,6 +504,49 @@ int btrfs_read_qgroup_config(struct btrfs_fs_info *fs_info)
return ret < 0 ? ret : 0;
}
static u64 btrfs_qgroup_subvolid(u64 qgroupid)
{
return (qgroupid & ((1ULL << BTRFS_QGROUP_LEVEL_SHIFT) - 1));
}
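
Its counterpart btrfs_qgroup_level(), used in the leak warning below, lives in the qgroup header rather than in these hunks; for reference it simply takes the other half of the qgroup ID:

static inline u64 btrfs_qgroup_level(u64 qgroupid)
{
	return qgroupid >> BTRFS_QGROUP_LEVEL_SHIFT;
}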
/*
* Called in close_ctree() when quota is still enabled. This verifies we don't
* leak some reserved space.
*
* Return false if no reserved space is left.
* Return true if some reserved space is leaked.
*/
bool btrfs_check_quota_leak(struct btrfs_fs_info *fs_info)
{
struct rb_node *node;
bool ret = false;
if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
return ret;
/*
* Since we're unmounting, there is no race and no need to grab qgroup
* lock. And here we don't go post-order to provide a more user
* friendly sorted result.
*/
for (node = rb_first(&fs_info->qgroup_tree); node; node = rb_next(node)) {
struct btrfs_qgroup *qgroup;
int i;
qgroup = rb_entry(node, struct btrfs_qgroup, node);
for (i = 0; i < BTRFS_QGROUP_RSV_LAST; i++) {
if (qgroup->rsv.values[i]) {
ret = true;
btrfs_warn(fs_info,
"qgroup %llu/%llu has unreleased space, type %d rsv %llu",
btrfs_qgroup_level(qgroup->qgroupid),
btrfs_qgroup_subvolid(qgroup->qgroupid),
i, qgroup->rsv.values[i]);
}
}
}
return ret;
}
/*
* This is called from close_ctree() or open_ctree() or btrfs_quota_disable(),
* the first two are in single-threaded paths. And for the third one, we have set
@ -1121,12 +1164,23 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
int ret = 0;
/*
* We need to have subvol_sem write locked, to prevent races between
* concurrent tasks trying to disable quotas, because we will unlock
* and relock qgroup_ioctl_lock across BTRFS_FS_QUOTA_ENABLED changes.
* We need to have subvol_sem write locked to prevent races with
* snapshot creation.
*/
lockdep_assert_held_write(&fs_info->subvol_sem);
/*
* Lock the cleaner mutex to prevent races with concurrent relocation,
* because relocation may be building backrefs for blocks of the quota
* root while we are deleting the root. This is like dropping fs roots
* of deleted snapshots/subvolumes, we need the same protection.
*
* This also prevents races between concurrent tasks trying to disable
* quotas, because we will unlock and relock qgroup_ioctl_lock across
* BTRFS_FS_QUOTA_ENABLED changes.
*/
mutex_lock(&fs_info->cleaner_mutex);
mutex_lock(&fs_info->qgroup_ioctl_lock);
if (!fs_info->quota_root)
goto out;
@ -1208,6 +1262,7 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
btrfs_end_transaction(trans);
else if (trans)
ret = btrfs_end_transaction(trans);
mutex_unlock(&fs_info->cleaner_mutex);
return ret;
}
@ -1340,7 +1395,6 @@ int btrfs_add_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
u64 dst)
{
struct btrfs_fs_info *fs_info = trans->fs_info;
struct btrfs_root *quota_root;
struct btrfs_qgroup *parent;
struct btrfs_qgroup *member;
struct btrfs_qgroup_list *list;
@ -1356,9 +1410,8 @@ int btrfs_add_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
return -ENOMEM;
mutex_lock(&fs_info->qgroup_ioctl_lock);
quota_root = fs_info->quota_root;
if (!quota_root) {
ret = -EINVAL;
if (!fs_info->quota_root) {
ret = -ENOTCONN;
goto out;
}
member = find_qgroup_rb(fs_info, src);
@ -1404,7 +1457,6 @@ static int __del_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
u64 dst)
{
struct btrfs_fs_info *fs_info = trans->fs_info;
struct btrfs_root *quota_root;
struct btrfs_qgroup *parent;
struct btrfs_qgroup *member;
struct btrfs_qgroup_list *list;
@ -1417,9 +1469,8 @@ static int __del_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
if (!tmp)
return -ENOMEM;
quota_root = fs_info->quota_root;
if (!quota_root) {
ret = -EINVAL;
if (!fs_info->quota_root) {
ret = -ENOTCONN;
goto out;
}
@ -1484,11 +1535,11 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
int ret = 0;
mutex_lock(&fs_info->qgroup_ioctl_lock);
quota_root = fs_info->quota_root;
if (!quota_root) {
ret = -EINVAL;
if (!fs_info->quota_root) {
ret = -ENOTCONN;
goto out;
}
quota_root = fs_info->quota_root;
qgroup = find_qgroup_rb(fs_info, qgroupid);
if (qgroup) {
ret = -EEXIST;
@ -1513,15 +1564,13 @@ int btrfs_create_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
{
struct btrfs_fs_info *fs_info = trans->fs_info;
struct btrfs_root *quota_root;
struct btrfs_qgroup *qgroup;
struct btrfs_qgroup_list *list;
int ret = 0;
mutex_lock(&fs_info->qgroup_ioctl_lock);
quota_root = fs_info->quota_root;
if (!quota_root) {
ret = -EINVAL;
if (!fs_info->quota_root) {
ret = -ENOTCONN;
goto out;
}
@ -1562,7 +1611,6 @@ int btrfs_limit_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid,
struct btrfs_qgroup_limit *limit)
{
struct btrfs_fs_info *fs_info = trans->fs_info;
struct btrfs_root *quota_root;
struct btrfs_qgroup *qgroup;
int ret = 0;
/* Sometimes we would want to clear the limit on this qgroup.
@ -1572,9 +1620,8 @@ int btrfs_limit_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid,
const u64 CLEAR_VALUE = -1;
mutex_lock(&fs_info->qgroup_ioctl_lock);
quota_root = fs_info->quota_root;
if (!quota_root) {
ret = -EINVAL;
if (!fs_info->quota_root) {
ret = -ENOTCONN;
goto out;
}
@ -2674,15 +2721,23 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
}
/*
* called from commit_transaction. Writes all changed qgroups to disk.
* Writes all changed qgroups to disk.
* Called by the transaction commit path and the qgroup assign ioctl.
*/
int btrfs_run_qgroups(struct btrfs_trans_handle *trans)
{
struct btrfs_fs_info *fs_info = trans->fs_info;
struct btrfs_root *quota_root = fs_info->quota_root;
int ret = 0;
if (!quota_root)
/*
* In case we are called from the qgroup assign ioctl, assert that we
* are holding the qgroup_ioctl_lock, otherwise we can race with a quota
* disable operation (ioctl) and access a freed quota root.
*/
if (trans->transaction->state != TRANS_STATE_COMMIT_DOING)
lockdep_assert_held(&fs_info->qgroup_ioctl_lock);
if (!fs_info->quota_root)
return ret;
spin_lock(&fs_info->qgroup_lock);
@ -2945,7 +3000,6 @@ static bool qgroup_check_limits(const struct btrfs_qgroup *qg, u64 num_bytes)
static int qgroup_reserve(struct btrfs_root *root, u64 num_bytes, bool enforce,
enum btrfs_qgroup_rsv_type type)
{
struct btrfs_root *quota_root;
struct btrfs_qgroup *qgroup;
struct btrfs_fs_info *fs_info = root->fs_info;
u64 ref_root = root->root_key.objectid;
@ -2964,8 +3018,7 @@ static int qgroup_reserve(struct btrfs_root *root, u64 num_bytes, bool enforce,
enforce = false;
spin_lock(&fs_info->qgroup_lock);
quota_root = fs_info->quota_root;
if (!quota_root)
if (!fs_info->quota_root)
goto out;
qgroup = find_qgroup_rb(fs_info, ref_root);
@ -3032,7 +3085,6 @@ void btrfs_qgroup_free_refroot(struct btrfs_fs_info *fs_info,
u64 ref_root, u64 num_bytes,
enum btrfs_qgroup_rsv_type type)
{
struct btrfs_root *quota_root;
struct btrfs_qgroup *qgroup;
struct ulist_node *unode;
struct ulist_iterator uiter;
@ -3050,8 +3102,7 @@ void btrfs_qgroup_free_refroot(struct btrfs_fs_info *fs_info,
}
spin_lock(&fs_info->qgroup_lock);
quota_root = fs_info->quota_root;
if (!quota_root)
if (!fs_info->quota_root)
goto out;
qgroup = find_qgroup_rb(fs_info, ref_root);
@ -3942,7 +3993,6 @@ void __btrfs_qgroup_free_meta(struct btrfs_root *root, int num_bytes,
static void qgroup_convert_meta(struct btrfs_fs_info *fs_info, u64 ref_root,
int num_bytes)
{
struct btrfs_root *quota_root = fs_info->quota_root;
struct btrfs_qgroup *qgroup;
struct ulist_node *unode;
struct ulist_iterator uiter;
@ -3950,7 +4000,7 @@ static void qgroup_convert_meta(struct btrfs_fs_info *fs_info, u64 ref_root,
if (num_bytes == 0)
return;
if (!quota_root)
if (!fs_info->quota_root)
return;
spin_lock(&fs_info->qgroup_lock);


@ -416,5 +416,6 @@ int btrfs_qgroup_add_swapped_blocks(struct btrfs_trans_handle *trans,
int btrfs_qgroup_trace_subtree_after_cow(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct extent_buffer *eb);
void btrfs_qgroup_destroy_extent_records(struct btrfs_transaction *trans);
bool btrfs_check_quota_leak(struct btrfs_fs_info *fs_info);
#endif


@ -706,8 +706,13 @@ btrfs_attach_transaction_barrier(struct btrfs_root *root)
trans = start_transaction(root, 0, TRANS_ATTACH,
BTRFS_RESERVE_NO_FLUSH, true);
if (trans == ERR_PTR(-ENOENT))
btrfs_wait_for_commit(root->fs_info, 0);
if (trans == ERR_PTR(-ENOENT)) {
int ret;
ret = btrfs_wait_for_commit(root->fs_info, 0);
if (ret)
return ERR_PTR(ret);
}
return trans;
}
@ -771,6 +776,7 @@ int btrfs_wait_for_commit(struct btrfs_fs_info *fs_info, u64 transid)
}
wait_for_commit(cur_trans);
ret = cur_trans->aborted;
btrfs_put_transaction(cur_trans);
out:
return ret;


@ -2790,7 +2790,19 @@ int ceph_get_caps(struct file *filp, int need, int want,
if (ret == -EAGAIN)
continue;
if (!ret) {
struct ceph_mds_client *mdsc = fsc->mdsc;
struct cap_wait cw;
DEFINE_WAIT_FUNC(wait, woken_wake_function);
cw.ino = inode->i_ino;
cw.tgid = current->tgid;
cw.need = need;
cw.want = want;
spin_lock(&mdsc->caps_list_lock);
list_add(&cw.list, &mdsc->cap_wait_list);
spin_unlock(&mdsc->caps_list_lock);
add_wait_queue(&ci->i_cap_wq, &wait);
flags |= NON_BLOCKING;
@ -2804,6 +2816,11 @@ int ceph_get_caps(struct file *filp, int need, int want,
}
remove_wait_queue(&ci->i_cap_wq, &wait);
spin_lock(&mdsc->caps_list_lock);
list_del(&cw.list);
spin_unlock(&mdsc->caps_list_lock);
if (ret == -EAGAIN)
continue;
}


@ -139,6 +139,7 @@ static int caps_show(struct seq_file *s, void *p)
struct ceph_fs_client *fsc = s->private;
struct ceph_mds_client *mdsc = fsc->mdsc;
int total, avail, used, reserved, min, i;
struct cap_wait *cw;
ceph_reservation_status(fsc, &total, &avail, &used, &reserved, &min);
seq_printf(s, "total\t\t%d\n"
@ -166,6 +167,18 @@ static int caps_show(struct seq_file *s, void *p)
}
mutex_unlock(&mdsc->mutex);
seq_printf(s, "\n\nWaiters:\n--------\n");
seq_printf(s, "tgid ino need want\n");
seq_printf(s, "-----------------------------------------------------\n");
spin_lock(&mdsc->caps_list_lock);
list_for_each_entry(cw, &mdsc->cap_wait_list, list) {
seq_printf(s, "%-13d0x%-17lx%-17s%-17s\n", cw->tgid, cw->ino,
ceph_cap_string(cw->need),
ceph_cap_string(cw->want));
}
spin_unlock(&mdsc->caps_list_lock);
return 0;
}


@ -4074,7 +4074,7 @@ static void delayed_work(struct work_struct *work)
dout("mdsc delayed_work\n");
if (mdsc->stopping)
if (mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHED)
return;
mutex_lock(&mdsc->mutex);
@ -4174,6 +4174,7 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc)
INIT_DELAYED_WORK(&mdsc->delayed_work, delayed_work);
mdsc->last_renew_caps = jiffies;
INIT_LIST_HEAD(&mdsc->cap_delay_list);
INIT_LIST_HEAD(&mdsc->cap_wait_list);
spin_lock_init(&mdsc->cap_delay_lock);
INIT_LIST_HEAD(&mdsc->snap_flush_list);
spin_lock_init(&mdsc->snap_flush_lock);
@ -4245,7 +4246,7 @@ static void wait_requests(struct ceph_mds_client *mdsc)
void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc)
{
dout("pre_umount\n");
mdsc->stopping = 1;
mdsc->stopping = CEPH_MDSC_STOPPING_BEGIN;
lock_unlock_sessions(mdsc);
ceph_flush_dirty_caps(mdsc);


@ -340,6 +340,19 @@ struct ceph_quotarealm_inode {
struct inode *inode;
};
struct cap_wait {
struct list_head list;
unsigned long ino;
pid_t tgid;
int need;
int want;
};
enum {
CEPH_MDSC_STOPPING_BEGIN = 1,
CEPH_MDSC_STOPPING_FLUSHED = 2,
};
/*
* mds client state
*/
@ -416,6 +429,7 @@ struct ceph_mds_client {
spinlock_t caps_list_lock;
struct list_head caps_list; /* unused (reserved or
unreserved) */
struct list_head cap_wait_list;
int caps_total_count; /* total caps allocated */
int caps_use_count; /* in use */
int caps_use_max; /* max used caps */


@ -1174,14 +1174,23 @@ static struct dentry *ceph_mount(struct file_system_type *fs_type,
static void ceph_kill_sb(struct super_block *s)
{
struct ceph_fs_client *fsc = ceph_sb_to_client(s);
dev_t dev = s->s_dev;
dout("kill_sb %p\n", s);
ceph_mdsc_pre_umount(fsc->mdsc);
flush_fs_workqueues(fsc);
generic_shutdown_super(s);
/*
* Though the kill_anon_super() will finally trigger the
* sync_filesystem() anyway, we still need to do it here
* and then bump the stage of shutdown to stop the work
* queue as early as possible.
*/
sync_filesystem(s);
fsc->mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHED;
kill_anon_super(s);
fsc->client->extra_mon_dispatch = NULL;
ceph_fs_debugfs_cleanup(fsc);
@ -1189,7 +1198,6 @@ static void ceph_kill_sb(struct super_block *s)
ceph_fscache_unregister_fs(fsc);
destroy_fs_client(fsc);
free_anon_bdev(dev);
}
static struct file_system_type ceph_fs_type = {


@ -19,21 +19,21 @@ static struct list_head recv_list;
static wait_queue_head_t send_wq;
static wait_queue_head_t recv_wq;
struct plock_async_data {
void *fl;
void *file;
struct file_lock flc;
int (*callback)(struct file_lock *fl, int result);
};
struct plock_op {
struct list_head list;
int done;
struct dlm_plock_info info;
int (*callback)(struct file_lock *fl, int result);
/* if set indicates async handling */
struct plock_async_data *data;
};
struct plock_xop {
struct plock_op xop;
void *fl;
void *file;
struct file_lock flc;
};
static inline void set_version(struct dlm_plock_info *info)
{
info->version[0] = DLM_PLOCK_VERSION_MAJOR;
@ -58,6 +58,12 @@ static int check_version(struct dlm_plock_info *info)
return 0;
}
static void dlm_release_plock_op(struct plock_op *op)
{
kfree(op->data);
kfree(op);
}
static void send_op(struct plock_op *op)
{
set_version(&op->info);
@ -101,22 +107,21 @@ static void do_unlock_close(struct dlm_ls *ls, u64 number,
int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
int cmd, struct file_lock *fl)
{
struct plock_async_data *op_data;
struct dlm_ls *ls;
struct plock_op *op;
struct plock_xop *xop;
int rv;
ls = dlm_find_lockspace_local(lockspace);
if (!ls)
return -EINVAL;
xop = kzalloc(sizeof(*xop), GFP_NOFS);
if (!xop) {
op = kzalloc(sizeof(*op), GFP_NOFS);
if (!op) {
rv = -ENOMEM;
goto out;
}
op = &xop->xop;
op->info.optype = DLM_PLOCK_OP_LOCK;
op->info.pid = fl->fl_pid;
op->info.ex = (fl->fl_type == F_WRLCK);
@ -125,35 +130,44 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
op->info.number = number;
op->info.start = fl->fl_start;
op->info.end = fl->fl_end;
/* async handling */
if (fl->fl_lmops && fl->fl_lmops->lm_grant) {
op_data = kzalloc(sizeof(*op_data), GFP_NOFS);
if (!op_data) {
dlm_release_plock_op(op);
rv = -ENOMEM;
goto out;
}
/* fl_owner is lockd which doesn't distinguish
processes on the nfs client */
op->info.owner = (__u64) fl->fl_pid;
op->callback = fl->fl_lmops->lm_grant;
locks_init_lock(&xop->flc);
locks_copy_lock(&xop->flc, fl);
xop->fl = fl;
xop->file = file;
op_data->callback = fl->fl_lmops->lm_grant;
locks_init_lock(&op_data->flc);
locks_copy_lock(&op_data->flc, fl);
op_data->fl = fl;
op_data->file = file;
op->data = op_data;
send_op(op);
rv = FILE_LOCK_DEFERRED;
goto out;
} else {
op->info.owner = (__u64)(long) fl->fl_owner;
}
send_op(op);
if (!op->callback) {
rv = wait_event_interruptible(recv_wq, (op->done != 0));
if (rv == -ERESTARTSYS) {
log_debug(ls, "dlm_posix_lock: wait killed %llx",
(unsigned long long)number);
spin_lock(&ops_lock);
list_del(&op->list);
spin_unlock(&ops_lock);
kfree(xop);
do_unlock_close(ls, number, file, fl);
goto out;
}
} else {
rv = FILE_LOCK_DEFERRED;
rv = wait_event_killable(recv_wq, (op->done != 0));
if (rv == -ERESTARTSYS) {
log_debug(ls, "%s: wait killed %llx", __func__,
(unsigned long long)number);
spin_lock(&ops_lock);
list_del(&op->list);
spin_unlock(&ops_lock);
dlm_release_plock_op(op);
do_unlock_close(ls, number, file, fl);
goto out;
}
@ -173,7 +187,7 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
(unsigned long long)number);
}
kfree(xop);
dlm_release_plock_op(op);
out:
dlm_put_lockspace(ls);
return rv;
@ -183,11 +197,11 @@ EXPORT_SYMBOL_GPL(dlm_posix_lock);
/* Returns failure iff a successful lock operation should be canceled */
static int dlm_plock_callback(struct plock_op *op)
{
struct plock_async_data *op_data = op->data;
struct file *file;
struct file_lock *fl;
struct file_lock *flc;
int (*notify)(struct file_lock *fl, int result) = NULL;
struct plock_xop *xop = (struct plock_xop *)op;
int rv = 0;
spin_lock(&ops_lock);
@ -199,10 +213,10 @@ static int dlm_plock_callback(struct plock_op *op)
spin_unlock(&ops_lock);
/* check if the following 2 are still valid or make a copy */
file = xop->file;
flc = &xop->flc;
fl = xop->fl;
notify = op->callback;
file = op_data->file;
flc = &op_data->flc;
fl = op_data->fl;
notify = op_data->callback;
if (op->info.rv) {
notify(fl, op->info.rv);
@ -233,7 +247,7 @@ static int dlm_plock_callback(struct plock_op *op)
}
out:
kfree(xop);
dlm_release_plock_op(op);
return rv;
}
@ -303,7 +317,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
rv = 0;
out_free:
kfree(op);
dlm_release_plock_op(op);
out:
dlm_put_lockspace(ls);
fl->fl_flags = fl_flags;
@ -371,7 +385,7 @@ int dlm_posix_get(dlm_lockspace_t *lockspace, u64 number, struct file *file,
rv = 0;
}
kfree(op);
dlm_release_plock_op(op);
out:
dlm_put_lockspace(ls);
return rv;
@ -407,7 +421,7 @@ static ssize_t dev_read(struct file *file, char __user *u, size_t count,
(the process did not make an unlock call). */
if (op->info.flags & DLM_PLOCK_FL_CLOSE)
kfree(op);
dlm_release_plock_op(op);
if (copy_to_user(u, &info, sizeof(info)))
return -EFAULT;
@ -439,7 +453,7 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count,
op->info.owner == info.owner) {
list_del_init(&op->list);
memcpy(&op->info, &info, sizeof(info));
if (op->callback)
if (op->data)
do_callback = 1;
else
op->done = 1;


@ -68,10 +68,7 @@ struct mb_cache;
* second extended-fs super-block data in memory
*/
struct ext2_sb_info {
unsigned long s_frag_size; /* Size of a fragment in bytes */
unsigned long s_frags_per_block;/* Number of fragments per block */
unsigned long s_inodes_per_block;/* Number of inodes per block */
unsigned long s_frags_per_group;/* Number of fragments in a group */
unsigned long s_blocks_per_group;/* Number of blocks in a group */
unsigned long s_inodes_per_group;/* Number of inodes in a group */
unsigned long s_itb_per_group; /* Number of inode table blocks per group */
@ -185,15 +182,6 @@ static inline struct ext2_sb_info *EXT2_SB(struct super_block *sb)
#define EXT2_INODE_SIZE(s) (EXT2_SB(s)->s_inode_size)
#define EXT2_FIRST_INO(s) (EXT2_SB(s)->s_first_ino)
/*
* Macro-instructions used to manage fragments
*/
#define EXT2_MIN_FRAG_SIZE 1024
#define EXT2_MAX_FRAG_SIZE 4096
#define EXT2_MIN_FRAG_LOG_SIZE 10
#define EXT2_FRAG_SIZE(s) (EXT2_SB(s)->s_frag_size)
#define EXT2_FRAGS_PER_BLOCK(s) (EXT2_SB(s)->s_frags_per_block)
/*
* Structure of a blocks group descriptor
*/


@ -681,10 +681,9 @@ static int ext2_setup_super (struct super_block * sb,
es->s_max_mnt_count = cpu_to_le16(EXT2_DFL_MAX_MNT_COUNT);
le16_add_cpu(&es->s_mnt_count, 1);
if (test_opt (sb, DEBUG))
ext2_msg(sb, KERN_INFO, "%s, %s, bs=%lu, fs=%lu, gc=%lu, "
ext2_msg(sb, KERN_INFO, "%s, %s, bs=%lu, gc=%lu, "
"bpg=%lu, ipg=%lu, mo=%04lx]",
EXT2FS_VERSION, EXT2FS_DATE, sb->s_blocksize,
sbi->s_frag_size,
sbi->s_groups_count,
EXT2_BLOCKS_PER_GROUP(sb),
EXT2_INODES_PER_GROUP(sb),
@ -1031,14 +1030,7 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
}
}
sbi->s_frag_size = EXT2_MIN_FRAG_SIZE <<
le32_to_cpu(es->s_log_frag_size);
if (sbi->s_frag_size == 0)
goto cantfind_ext2;
sbi->s_frags_per_block = sb->s_blocksize / sbi->s_frag_size;
sbi->s_blocks_per_group = le32_to_cpu(es->s_blocks_per_group);
sbi->s_frags_per_group = le32_to_cpu(es->s_frags_per_group);
sbi->s_inodes_per_group = le32_to_cpu(es->s_inodes_per_group);
sbi->s_inodes_per_block = sb->s_blocksize / EXT2_INODE_SIZE(sb);
@ -1064,11 +1056,10 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
goto failed_mount;
}
if (sb->s_blocksize != sbi->s_frag_size) {
if (es->s_log_frag_size != es->s_log_block_size) {
ext2_msg(sb, KERN_ERR,
"error: fragsize %lu != blocksize %lu"
"(not supported yet)",
sbi->s_frag_size, sb->s_blocksize);
"error: fragsize log %u != blocksize log %u",
le32_to_cpu(es->s_log_frag_size), sb->s_blocksize_bits);
goto failed_mount;
}
@ -1078,12 +1069,6 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
sbi->s_blocks_per_group);
goto failed_mount;
}
if (sbi->s_frags_per_group > sb->s_blocksize * 8) {
ext2_msg(sb, KERN_ERR,
"error: #fragments per group too big: %lu",
sbi->s_frags_per_group);
goto failed_mount;
}
if (sbi->s_inodes_per_group < sbi->s_inodes_per_block ||
sbi->s_inodes_per_group > sb->s_blocksize * 8) {
ext2_msg(sb, KERN_ERR,


@ -1429,7 +1429,7 @@ struct ext4_sb_info {
unsigned long s_commit_interval;
u32 s_max_batch_time;
u32 s_min_batch_time;
struct block_device *journal_bdev;
struct block_device *s_journal_bdev;
#ifdef CONFIG_QUOTA
/* Names of quota files with journalled quota */
char __rcu *s_qf_names[EXT4_MAXQUOTAS];


@ -576,8 +576,8 @@ static bool ext4_getfsmap_is_valid_device(struct super_block *sb,
if (fm->fmr_device == 0 || fm->fmr_device == UINT_MAX ||
fm->fmr_device == new_encode_dev(sb->s_bdev->bd_dev))
return true;
if (EXT4_SB(sb)->journal_bdev &&
fm->fmr_device == new_encode_dev(EXT4_SB(sb)->journal_bdev->bd_dev))
if (EXT4_SB(sb)->s_journal_bdev &&
fm->fmr_device == new_encode_dev(EXT4_SB(sb)->s_journal_bdev->bd_dev))
return true;
return false;
}
@ -647,9 +647,9 @@ int ext4_getfsmap(struct super_block *sb, struct ext4_fsmap_head *head,
memset(handlers, 0, sizeof(handlers));
handlers[0].gfd_dev = new_encode_dev(sb->s_bdev->bd_dev);
handlers[0].gfd_fn = ext4_getfsmap_datadev;
if (EXT4_SB(sb)->journal_bdev) {
if (EXT4_SB(sb)->s_journal_bdev) {
handlers[1].gfd_dev = new_encode_dev(
EXT4_SB(sb)->journal_bdev->bd_dev);
EXT4_SB(sb)->s_journal_bdev->bd_dev);
handlers[1].gfd_fn = ext4_getfsmap_logdev;
}


@ -573,6 +573,7 @@ static int ext4_shutdown(struct super_block *sb, unsigned long arg)
{
struct ext4_sb_info *sbi = EXT4_SB(sb);
__u32 flags;
struct super_block *ret;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
@ -591,7 +592,9 @@ static int ext4_shutdown(struct super_block *sb, unsigned long arg)
switch (flags) {
case EXT4_GOING_FLAGS_DEFAULT:
freeze_bdev(sb->s_bdev);
ret = freeze_bdev(sb->s_bdev);
if (IS_ERR(ret))
return PTR_ERR(ret);
set_bit(EXT4_FLAGS_SHUTDOWN, &sbi->s_ext4_flags);
thaw_bdev(sb->s_bdev, sb);
break;


@ -906,10 +906,16 @@ static void ext4_blkdev_put(struct block_device *bdev)
static void ext4_blkdev_remove(struct ext4_sb_info *sbi)
{
struct block_device *bdev;
bdev = sbi->journal_bdev;
bdev = sbi->s_journal_bdev;
if (bdev) {
/*
* Invalidate the journal device's buffers. We don't want them
* floating about in memory - the physical journal device may be
* hotswapped, and it breaks the `ro-after' testing code.
*/
invalidate_bdev(bdev);
ext4_blkdev_put(bdev);
sbi->journal_bdev = NULL;
sbi->s_journal_bdev = NULL;
}
}
@ -1034,14 +1040,8 @@ static void ext4_put_super(struct super_block *sb)
sync_blockdev(sb->s_bdev);
invalidate_bdev(sb->s_bdev);
if (sbi->journal_bdev && sbi->journal_bdev != sb->s_bdev) {
/*
* Invalidate the journal device's buffers. We don't want them
* floating about in memory - the physical journal device may be
* hotswapped, and it breaks the `ro-after' testing code.
*/
sync_blockdev(sbi->journal_bdev);
invalidate_bdev(sbi->journal_bdev);
if (sbi->s_journal_bdev && sbi->s_journal_bdev != sb->s_bdev) {
sync_blockdev(sbi->s_journal_bdev);
ext4_blkdev_remove(sbi);
}
@ -3648,7 +3648,7 @@ int ext4_calculate_overhead(struct super_block *sb)
* Add the internal journal blocks whether the journal has been
* loaded or not
*/
if (sbi->s_journal && !sbi->journal_bdev)
if (sbi->s_journal && !sbi->s_journal_bdev)
overhead += EXT4_NUM_B2C(sbi, sbi->s_journal->j_maxlen);
else if (ext4_has_feature_journal(sb) && !sbi->s_journal && j_inum) {
/* j_inum for internal journal is non-zero */
@ -4833,6 +4833,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
ext4_blkdev_remove(sbi);
brelse(bh);
out_fail:
invalidate_bdev(sb->s_bdev);
sb->s_fs_info = NULL;
kfree(sbi->s_blockgroup_lock);
out_free_base:
@ -5008,7 +5009,7 @@ static journal_t *ext4_get_dev_journal(struct super_block *sb,
be32_to_cpu(journal->j_superblock->s_nr_users));
goto out_journal;
}
EXT4_SB(sb)->journal_bdev = bdev;
EXT4_SB(sb)->s_journal_bdev = bdev;
ext4_init_journal_params(sb, journal);
return journal;


@ -57,28 +57,6 @@ static inline void __buffer_unlink(struct journal_head *jh)
}
}
/*
* Move a buffer from the checkpoint list to the checkpoint io list
*
* Called with j_list_lock held
*/
static inline void __buffer_relink_io(struct journal_head *jh)
{
transaction_t *transaction = jh->b_cp_transaction;
__buffer_unlink_first(jh);
if (!transaction->t_checkpoint_io_list) {
jh->b_cpnext = jh->b_cpprev = jh;
} else {
jh->b_cpnext = transaction->t_checkpoint_io_list;
jh->b_cpprev = transaction->t_checkpoint_io_list->b_cpprev;
jh->b_cpprev->b_cpnext = jh;
jh->b_cpnext->b_cpprev = jh;
}
transaction->t_checkpoint_io_list = jh;
}
/*
* Try to release a checkpointed buffer from its transaction.
* Returns 1 if we released it and 2 if we also released the
@ -91,8 +69,7 @@ static int __try_to_free_cp_buf(struct journal_head *jh)
int ret = 0;
struct buffer_head *bh = jh2bh(jh);
if (jh->b_transaction == NULL && !buffer_locked(bh) &&
!buffer_dirty(bh) && !buffer_write_io_error(bh)) {
if (!jh->b_transaction && !buffer_locked(bh) && !buffer_dirty(bh)) {
JBUFFER_TRACE(jh, "remove from checkpoint list");
ret = __jbd2_journal_remove_checkpoint(jh) + 1;
}
@ -191,6 +168,7 @@ __flush_batch(journal_t *journal, int *batch_count)
struct buffer_head *bh = journal->j_chkpt_bhs[i];
BUFFER_TRACE(bh, "brelse");
__brelse(bh);
journal->j_chkpt_bhs[i] = NULL;
}
*batch_count = 0;
}
@ -228,7 +206,6 @@ int jbd2_log_do_checkpoint(journal_t *journal)
* OK, we need to start writing disk blocks. Take one transaction
* and write it.
*/
result = 0;
spin_lock(&journal->j_list_lock);
if (!journal->j_checkpoint_transactions)
goto out;
@ -251,15 +228,6 @@ int jbd2_log_do_checkpoint(journal_t *journal)
jh = transaction->t_checkpoint_list;
bh = jh2bh(jh);
if (buffer_locked(bh)) {
get_bh(bh);
spin_unlock(&journal->j_list_lock);
wait_on_buffer(bh);
/* the journal_head may have gone by now */
BUFFER_TRACE(bh, "brelse");
__brelse(bh);
goto retry;
}
if (jh->b_transaction != NULL) {
transaction_t *t = jh->b_transaction;
tid_t tid = t->t_tid;
@ -294,32 +262,50 @@ int jbd2_log_do_checkpoint(journal_t *journal)
spin_lock(&journal->j_list_lock);
goto restart;
}
if (!buffer_dirty(bh)) {
if (unlikely(buffer_write_io_error(bh)) && !result)
result = -EIO;
if (!trylock_buffer(bh)) {
/*
* The buffer is locked: it may be under writeback, have been
* flushed out in the last couple of cycles, or be getting
* re-added to a new transaction, so we need to check it
* again once it is unlocked.
*/
get_bh(bh);
spin_unlock(&journal->j_list_lock);
wait_on_buffer(bh);
/* the journal_head may have gone by now */
BUFFER_TRACE(bh, "brelse");
__brelse(bh);
goto retry;
} else if (!buffer_dirty(bh)) {
unlock_buffer(bh);
BUFFER_TRACE(bh, "remove from checkpoint");
if (__jbd2_journal_remove_checkpoint(jh))
/* The transaction was released; we're done */
/*
* If the transaction was released or the checkpoint
* list was empty, we're done.
*/
if (__jbd2_journal_remove_checkpoint(jh) ||
!transaction->t_checkpoint_list)
goto out;
continue;
} else {
unlock_buffer(bh);
/*
* We are about to write the buffer, it could be
* raced by some other transaction shrink or buffer
* re-log logic once we release the j_list_lock,
* leave it on the checkpoint list and check status
* again to make sure it's clean.
*/
BUFFER_TRACE(bh, "queue");
get_bh(bh);
J_ASSERT_BH(bh, !buffer_jwrite(bh));
journal->j_chkpt_bhs[batch_count++] = bh;
transaction->t_chp_stats.cs_written++;
transaction->t_checkpoint_list = jh->b_cpnext;
}
/*
* Important: we are about to write the buffer, and
* possibly block, while still holding the journal
* lock. We cannot afford to let the transaction
* logic start messing around with this buffer before
* we write it to disk, as that would break
* recoverability.
*/
BUFFER_TRACE(bh, "queue");
get_bh(bh);
J_ASSERT_BH(bh, !buffer_jwrite(bh));
journal->j_chkpt_bhs[batch_count++] = bh;
__buffer_relink_io(jh);
transaction->t_chp_stats.cs_written++;
if ((batch_count == JBD2_NR_BATCH) ||
need_resched() ||
spin_needbreak(&journal->j_list_lock))
need_resched() || spin_needbreak(&journal->j_list_lock) ||
jh2bh(transaction->t_checkpoint_list) == journal->j_chkpt_bhs[0])
goto unlock_and_flush;
}
@ -333,46 +319,9 @@ int jbd2_log_do_checkpoint(journal_t *journal)
goto restart;
}
/*
* Now we issued all of the transaction's buffers, let's deal
* with the buffers that are out for I/O.
*/
restart2:
/* Did somebody clean up the transaction in the meanwhile? */
if (journal->j_checkpoint_transactions != transaction ||
transaction->t_tid != this_tid)
goto out;
while (transaction->t_checkpoint_io_list) {
jh = transaction->t_checkpoint_io_list;
bh = jh2bh(jh);
if (buffer_locked(bh)) {
get_bh(bh);
spin_unlock(&journal->j_list_lock);
wait_on_buffer(bh);
/* the journal_head may have gone by now */
BUFFER_TRACE(bh, "brelse");
__brelse(bh);
spin_lock(&journal->j_list_lock);
goto restart2;
}
if (unlikely(buffer_write_io_error(bh)) && !result)
result = -EIO;
/*
* Now in whatever state the buffer currently is, we
* know that it has been written out and so we can
* drop it from the list
*/
if (__jbd2_journal_remove_checkpoint(jh))
break;
}
out:
spin_unlock(&journal->j_list_lock);
if (result < 0)
jbd2_journal_abort(journal, result);
else
result = jbd2_cleanup_journal_tail(journal);
result = jbd2_cleanup_journal_tail(journal);
return (result < 0) ? result : 0;
}


@ -562,12 +562,14 @@ static int __jbd2_journal_force_commit(journal_t *journal)
}
/**
* Force and wait upon a commit if the calling process is not within
* transaction. This is used for forcing out undo-protected data which contains
* bitmaps, when the fs is running out of space.
* jbd2_journal_force_commit_nested - Force and wait upon a commit if the
* calling process is not within transaction.
*
* @journal: journal to force
* Returns true if progress was made.
*
* This is used for forcing out undo-protected data which contains
* bitmaps, when the fs is running out of space.
*/
int jbd2_journal_force_commit_nested(journal_t *journal)
{
@ -578,7 +580,7 @@ int jbd2_journal_force_commit_nested(journal_t *journal)
}
/**
* int journal_force_commit() - force any uncommitted transactions
* jbd2_journal_force_commit() - force any uncommitted transactions
* @journal: journal to force
*
* Caller want unconditional commit. We can only force the running transaction
@ -1266,7 +1268,7 @@ journal_t *jbd2_journal_init_inode(struct inode *inode)
* superblock as being NULL to prevent the journal destroy from writing
* back a bogus superblock.
*/
static void journal_fail_superblock (journal_t *journal)
static void journal_fail_superblock(journal_t *journal)
{
struct buffer_head *bh = journal->j_sb_buffer;
brelse(bh);
@ -1634,7 +1636,7 @@ static int load_superblock(journal_t *journal)
/**
* int jbd2_journal_load() - Read journal from disk.
* jbd2_journal_load() - Read journal from disk.
* @journal: Journal to act on.
*
* Given a journal_t structure which tells us which disk blocks contain
@ -1704,7 +1706,7 @@ int jbd2_journal_load(journal_t *journal)
}
/**
* void jbd2_journal_destroy() - Release a journal_t structure.
* jbd2_journal_destroy() - Release a journal_t structure.
* @journal: Journal to act on.
*
* Release a journal_t structure once it is no longer in use by the
@ -1780,7 +1782,7 @@ int jbd2_journal_destroy(journal_t *journal)
/**
*int jbd2_journal_check_used_features () - Check if features specified are used.
* jbd2_journal_check_used_features() - Check if features specified are used.
* @journal: Journal to check.
* @compat: bitmask of compatible features
* @ro: bitmask of features that force read-only mount
@ -1790,7 +1792,7 @@ int jbd2_journal_destroy(journal_t *journal)
* features. Return true (non-zero) if it does.
**/
int jbd2_journal_check_used_features (journal_t *journal, unsigned long compat,
int jbd2_journal_check_used_features(journal_t *journal, unsigned long compat,
unsigned long ro, unsigned long incompat)
{
journal_superblock_t *sb;
@ -1815,7 +1817,7 @@ int jbd2_journal_check_used_features (journal_t *journal, unsigned long compat,
}
/**
* int jbd2_journal_check_available_features() - Check feature set in journalling layer
* jbd2_journal_check_available_features() - Check feature set in journalling layer
* @journal: Journal to check.
* @compat: bitmask of compatible features
* @ro: bitmask of features that force read-only mount
@ -1825,7 +1827,7 @@ int jbd2_journal_check_used_features (journal_t *journal, unsigned long compat,
* all of a given set of features on this journal. Return true
* (non-zero) if it can. */
int jbd2_journal_check_available_features (journal_t *journal, unsigned long compat,
int jbd2_journal_check_available_features(journal_t *journal, unsigned long compat,
unsigned long ro, unsigned long incompat)
{
if (!compat && !ro && !incompat)
@ -1847,7 +1849,7 @@ int jbd2_journal_check_available_features (journal_t *journal, unsigned long com
}
/**
* int jbd2_journal_set_features () - Mark a given journal feature in the superblock
* jbd2_journal_set_features() - Mark a given journal feature in the superblock
* @journal: Journal to act on.
* @compat: bitmask of compatible features
* @ro: bitmask of features that force read-only mount
@ -1858,7 +1860,7 @@ int jbd2_journal_check_available_features (journal_t *journal, unsigned long com
*
*/
int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
int jbd2_journal_set_features(journal_t *journal, unsigned long compat,
unsigned long ro, unsigned long incompat)
{
#define INCOMPAT_FEATURE_ON(f) \
@ -1929,7 +1931,7 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
}
/*
* jbd2_journal_clear_features () - Clear a given journal feature in the
* jbd2_journal_clear_features() - Clear a given journal feature in the
* superblock
* @journal: Journal to act on.
* @compat: bitmask of compatible features
@ -1956,7 +1958,7 @@ void jbd2_journal_clear_features(journal_t *journal, unsigned long compat,
EXPORT_SYMBOL(jbd2_journal_clear_features);
/**
* int jbd2_journal_flush () - Flush journal
* jbd2_journal_flush() - Flush journal
* @journal: Journal to act on.
*
* Flush all data for a given journal to disk and empty the journal.
@@ -2031,7 +2033,7 @@ int jbd2_journal_flush(journal_t *journal)
 }
 
 /**
- * int jbd2_journal_wipe() - Wipe journal contents
+ * jbd2_journal_wipe() - Wipe journal contents
  * @journal: Journal to act on.
  * @write: flag (see below)
  *
@@ -2072,7 +2074,7 @@ int jbd2_journal_wipe(journal_t *journal, int write)
 }
 
 /**
- * void jbd2_journal_abort () - Shutdown the journal immediately.
+ * jbd2_journal_abort () - Shutdown the journal immediately.
  * @journal: the journal to shutdown.
  * @errno: an error number to record in the journal indicating
  * the reason for the shutdown.
@@ -2158,7 +2160,7 @@ void jbd2_journal_abort(journal_t *journal, int errno)
 }
 
 /**
- * int jbd2_journal_errno () - returns the journal's error state.
+ * jbd2_journal_errno() - returns the journal's error state.
  * @journal: journal to examine.
  *
  * This is the errno number set with jbd2_journal_abort(), the last
@@ -2182,7 +2184,7 @@ int jbd2_journal_errno(journal_t *journal)
 }
 
 /**
- * int jbd2_journal_clear_err () - clears the journal's error state
+ * jbd2_journal_clear_err() - clears the journal's error state
  * @journal: journal to act on.
  *
  * An error must be cleared or acked to take a FS out of readonly
@@ -2202,7 +2204,7 @@ int jbd2_journal_clear_err(journal_t *journal)
 }
 
 /**
- * void jbd2_journal_ack_err() - Ack journal err.
+ * jbd2_journal_ack_err() - Ack journal err.
  * @journal: journal to act on.
  *
  * An error must be cleared or acked to take a FS out of readonly

diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
@@ -490,7 +490,7 @@ EXPORT_SYMBOL(jbd2__journal_start);
 
 /**
- * handle_t *jbd2_journal_start() - Obtain a new handle.
+ * jbd2_journal_start() - Obtain a new handle.
  * @journal: Journal to start transaction on.
  * @nblocks: number of block buffer we might modify
  *
@@ -525,7 +525,7 @@ void jbd2_journal_free_reserved(handle_t *handle)
 EXPORT_SYMBOL(jbd2_journal_free_reserved);
 
 /**
- * int jbd2_journal_start_reserved() - start reserved handle
+ * jbd2_journal_start_reserved() - start reserved handle
  * @handle: handle to start
  * @type: for handle statistics
  * @line_no: for handle statistics
@@ -579,7 +579,7 @@ int jbd2_journal_start_reserved(handle_t *handle, unsigned int type,
 EXPORT_SYMBOL(jbd2_journal_start_reserved);
 
 /**
- * int jbd2_journal_extend() - extend buffer credits.
+ * jbd2_journal_extend() - extend buffer credits.
  * @handle: handle to 'extend'
  * @nblocks: nr blocks to try to extend by.
  *
@@ -659,7 +659,7 @@ int jbd2_journal_extend(handle_t *handle, int nblocks)
 
 /**
- * int jbd2_journal_restart() - restart a handle .
+ * jbd2__journal_restart() - restart a handle .
  * @handle: handle to restart
  * @nblocks: nr credits requested
  * @gfp_mask: memory allocation flags (for start_this_handle)
@@ -736,7 +736,7 @@ int jbd2_journal_restart(handle_t *handle, int nblocks)
 EXPORT_SYMBOL(jbd2_journal_restart);
 
 /**
- * void jbd2_journal_lock_updates () - establish a transaction barrier.
+ * jbd2_journal_lock_updates () - establish a transaction barrier.
  * @journal: Journal to establish a barrier on.
  *
  * This locks out any further updates from being started, and blocks
@@ -795,7 +795,7 @@ void jbd2_journal_lock_updates(journal_t *journal)
 }
 
 /**
- * void jbd2_journal_unlock_updates (journal_t* journal) - release barrier
+ * jbd2_journal_unlock_updates () - release barrier
  * @journal: Journal to release the barrier on.
  *
  * Release a transaction barrier obtained with jbd2_journal_lock_updates().
@@ -1103,7 +1103,8 @@ static bool jbd2_write_access_granted(handle_t *handle, struct buffer_head *bh,
 }
 
 /**
- * int jbd2_journal_get_write_access() - notify intent to modify a buffer for metadata (not data) update.
+ * jbd2_journal_get_write_access() - notify intent to modify a buffer
+ * for metadata (not data) update.
  * @handle: transaction to add buffer modifications to
  * @bh: bh to be used for metadata writes
  *
@@ -1147,7 +1148,7 @@ int jbd2_journal_get_write_access(handle_t *handle, struct buffer_head *bh)
  * unlocked buffer beforehand. */
 
 /**
- * int jbd2_journal_get_create_access () - notify intent to use newly created bh
+ * jbd2_journal_get_create_access () - notify intent to use newly created bh
  * @handle: transaction to new buffer to
  * @bh: new buffer.
  *
@@ -1227,7 +1228,7 @@ int jbd2_journal_get_create_access(handle_t *handle, struct buffer_head *bh)
 }
 
 /**
- * int jbd2_journal_get_undo_access() - Notify intent to modify metadata with
+ * jbd2_journal_get_undo_access() - Notify intent to modify metadata with
  * non-rewindable consequences
  * @handle: transaction
  * @bh: buffer to undo
@@ -1304,7 +1305,7 @@ int jbd2_journal_get_undo_access(handle_t *handle, struct buffer_head *bh)
 }
 
 /**
- * void jbd2_journal_set_triggers() - Add triggers for commit writeout
+ * jbd2_journal_set_triggers() - Add triggers for commit writeout
  * @bh: buffer to trigger on
  * @type: struct jbd2_buffer_trigger_type containing the trigger(s).
  *
@@ -1346,7 +1347,7 @@ void jbd2_buffer_abort_trigger(struct journal_head *jh,
 }
 
 /**
- * int jbd2_journal_dirty_metadata() - mark a buffer as containing dirty metadata
+ * jbd2_journal_dirty_metadata() - mark a buffer as containing dirty metadata
  * @handle: transaction to add buffer to.
  * @bh: buffer to mark
  *
@@ -1524,7 +1525,7 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
 }
 
 /**
- * void jbd2_journal_forget() - bforget() for potentially-journaled buffers.
+ * jbd2_journal_forget() - bforget() for potentially-journaled buffers.
  * @handle: transaction handle
  * @bh: bh to 'forget'
  *
@@ -1699,7 +1700,7 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh)
 }
 
 /**
- * int jbd2_journal_stop() - complete a transaction
+ * jbd2_journal_stop() - complete a transaction
  * @handle: transaction to complete.
  *
  * All done for a particular handle.
@@ -2047,7 +2048,7 @@ __journal_try_to_free_buffer(journal_t *journal, struct buffer_head *bh)
 }
 
 /**
- * int jbd2_journal_try_to_free_buffers() - try to free page buffers.
+ * jbd2_journal_try_to_free_buffers() - try to free page buffers.
  * @journal: journal for operation
  * @page: to try and free
  * @gfp_mask: we use the mask to detect how hard should we try to release
@@ -2386,7 +2387,7 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
 }
 
 /**
- * void jbd2_journal_invalidatepage()
+ * jbd2_journal_invalidatepage()
  * @journal: journal to use for flush...
  * @page: page to flush
  * @offset: start of the range to invalidate

[Diff truncated: additional files changed in this release are not shown here.]
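The pattern across the hunks above is uniform: the first line of each kernel-doc block is reduced to the function name plus parentheses. kernel-doc treats that first line as the title of the block, so a leading return type ("int", "void", "handle_t *") or other stray text there causes scripts/kernel-doc to warn or misparse the documentation. As a sketch of the target form only, using a hypothetical helper that is not part of this diff:

/**
 * jbd2_example_helper() - One-line summary of the helper.
 * @journal: journal to operate on.
 * @nblocks: number of buffers the caller may modify.
 *
 * The title line above carries only the function name and "()", with
 * no return type in front of it; that is exactly the text the hunks
 * in this diff strip out of the jbd2 comments.
 *
 * Return: 0 on success, negative errno on failure.
 */
static int jbd2_example_helper(journal_t *journal, int nblocks)
{
        /* placeholder body; illustration only */
        return 0;
}

Blocks in this form can be checked with the in-tree validator, for example "scripts/kernel-doc -none fs/jbd2/journal.c", which prints a warning per malformed comment and stays silent when the markups are clean.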