Merge branch 'akpm' (patches from Andrew)
Merge third patch-bomb from Andrew Morton:

 - more ocfs2 changes
 - a few hotfixes
 - Andy's compat cleanups
 - misc fixes to fatfs, ptrace, coredump, cpumask, creds, eventfd,
   panic, ipmi, kgdb, profile, kfifo, ubsan, etc.
 - many rapidio updates: fixes, new drivers.
 - kcov: kernel code coverage feature.  Like gcov, but not
   "prohibitively expensive".
 - extable code consolidation for various archs

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (81 commits)
  ia64/extable: use generic search and sort routines
  x86/extable: use generic search and sort routines
  s390/extable: use generic search and sort routines
  alpha/extable: use generic search and sort routines
  kernel/...: convert pr_warning to pr_warn
  drivers: dma-coherent: use memset_io for DMA_MEMORY_IO mappings
  drivers: dma-coherent: use MEMREMAP_WC for DMA_MEMORY_MAP
  memremap: add MEMREMAP_WC flag
  memremap: don't modify flags
  kernel/signal.c: add compile-time check for __ARCH_SI_PREAMBLE_SIZE
  mm/mprotect.c: don't imply PROT_EXEC on non-exec fs
  ipc/sem: make semctl setting sempid consistent
  ubsan: fix tree-wide -Wmaybe-uninitialized false positives
  kfifo: fix sparse complaints
  scripts/gdb: account for changes in module data structure
  scripts/gdb: add cmdline reader command
  scripts/gdb: add version command
  kernel: add kcov code coverage
  profile: hide unused functions when !CONFIG_PROC_FS
  hpwdt: use nmi_panic() when kernel panics in NMI handler
  ...
commit a24e3d414e

Documentation/filesystems/ocfs2-online-filecheck.txt | 94 (new file)

@@ -0,0 +1,94 @@
OCFS2 online file check
-----------------------

This document describes the OCFS2 online file check feature.

Introduction
============
OCFS2 is often used in high-availability systems. However, OCFS2 usually
converts the filesystem to read-only when it encounters an error. This may
not be necessary, since turning the filesystem read-only would affect other
running processes as well, decreasing availability.
Therefore, a mount option (errors=continue) is introduced, which returns the
-EIO errno to the calling process and terminates further processing so that
the filesystem is not corrupted further. The filesystem is not converted to
read-only, and the problematic file's inode number is reported in the kernel
log. The user can try to check/fix this file via the online filecheck feature.
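
For example, mounting a volume with this option looks like the following
(the device and mount point here are hypothetical, for illustration only):

	# mount -t ocfs2 -o errors=continue /dev/sda1 /mnt/ocfs2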

Scope
=====
This effort is to check/fix small issues which may hinder day-to-day operations
of a cluster filesystem by turning the filesystem read-only. The scope of
checking/fixing is at the file level, initially for regular files and
eventually for all files (including system files) of the filesystem.

In case a directory-to-file link is incorrect, the directory inode is
reported as erroneous.

This feature is not suited for extravagant checks which involve dependencies on
other components of the filesystem, such as, but not limited to, checking if
the bits for file blocks in the allocation have been set. In case of such an
error, the offline fsck should/would be recommended.

Finally, such an operation/feature should not be automated lest the filesystem
may end up with more damage than before the repair attempt. So, this has to
be performed using user interaction and consent.

User interface
==============
When there are errors in the OCFS2 filesystem, they are usually accompanied
by the inode number which caused the error. This inode number would be the
input to check/fix the file.

There is a sysfs directory for each OCFS2 file system mount:

	/sys/fs/ocfs2/<devname>/filecheck

Here, <devname> indicates the name of the OCFS2 volume device which has
already been mounted. The file above accepts inode numbers. This can be used
to communicate with kernel space, telling it which file (inode number) will be
checked or fixed. Currently, three operations are supported, which include
checking an inode, fixing an inode and setting the size of the result record
history.

1. If you want to know what error exactly happened to <inode> before fixing, do

	# echo "<inode>" > /sys/fs/ocfs2/<devname>/filecheck/check
	# cat /sys/fs/ocfs2/<devname>/filecheck/check

The output is like this:
	INO	DONE	ERROR
	39502	1	GENERATION

<INO> lists the inode numbers.
<DONE> indicates whether the operation has been finished.
<ERROR> says what kind of error was found. For the detailed error numbers,
please refer to the file linux/fs/ocfs2/filecheck.h.

2. If you determine to fix this inode, do

	# echo "<inode>" > /sys/fs/ocfs2/<devname>/filecheck/fix
	# cat /sys/fs/ocfs2/<devname>/filecheck/fix

The output is like this:
	INO	DONE	ERROR
	39502	1	SUCCESS

This time, the <ERROR> column indicates whether this fix was successful or not.

3. The record cache is used to store the history of check/fix results. Its
default size is 10, and it can be adjusted within the range of 10 ~ 100. You
can adjust the size like this:

	# echo "<size>" > /sys/fs/ocfs2/<devname>/filecheck/set

Fixing stuff
============
On receiving the inode, the filesystem reads the inode and the file metadata.
In case of errors, the filesystem fixes the errors and reports the problems it
fixed in the kernel log. As a precautionary measure, the inode must first be
checked for errors before performing a final fix.

The inode and the result history will be maintained temporarily in a small
linked list buffer which would contain the last (N) inodes fixed/checked; the
detailed errors which were fixed/checked are printed in the kernel log.
@@ -56,9 +56,10 @@ iocharset=<name> -- Character set to use for converting between the
 		 you should consider the following option instead.

 utf8=<bool>	 -- UTF-8 is the filesystem safe version of Unicode that
-		 is used by the console. It can be enabled for the
-		 filesystem with this option. If 'uni_xlate' gets set,
-		 UTF-8 gets disabled.
+		 is used by the console. It can be enabled or disabled
+		 for the filesystem with this option.
+		 If 'uni_xlate' gets set, UTF-8 gets disabled.
+		 By default, FAT_DEFAULT_UTF8 setting is used.

 uni_xlate=<bool> -- Translate unhandled Unicode characters to special
 		 escaped sequences. This would let you backup and
Documentation/kcov.txt | 111 (new file)

@@ -0,0 +1,111 @@
kcov: code coverage for fuzzing
===============================

kcov exposes kernel code coverage information in a form suitable for coverage-
guided fuzzing (randomized testing). Coverage data of a running kernel is
exported via the "kcov" debugfs file. Coverage collection is enabled on a task
basis, and thus it can capture precise coverage of a single system call.

Note that kcov does not aim to collect as much coverage as possible. It aims
to collect more or less stable coverage that is a function of syscall inputs.
To achieve this goal it does not collect coverage in soft/hard interrupts,
and instrumentation of some inherently non-deterministic parts of the kernel
is disabled (e.g. scheduler, locking).

Usage:
======

Configure the kernel with:

        CONFIG_KCOV=y

CONFIG_KCOV requires gcc built from revision 231296 or later.
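You can probe whether your compiler supports the required instrumentation flag
with a generic compiler-feature test like the following (not part of the
original document; it is the same kind of check the Makefile's cc-option
hunk below performs):

        $ gcc -fsanitize-coverage=trace-pc -c -x c /dev/null -o /dev/null

If the flag is unsupported, gcc fails with an "unrecognized command line
option" error, and the build falls back to disabling CFLAGS_KCOV.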
Profiling data will only become accessible once debugfs has been mounted:

        mount -t debugfs none /sys/kernel/debug

The following program demonstrates kcov usage from within a test program:

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <fcntl.h>

    #define KCOV_INIT_TRACE		_IOR('c', 1, unsigned long)
    #define KCOV_ENABLE			_IO('c', 100)
    #define KCOV_DISABLE		_IO('c', 101)
    #define COVER_SIZE			(64<<10)

    int main(int argc, char **argv)
    {
	int fd;
	unsigned long *cover, n, i;

	/* A single fd descriptor allows coverage collection on a single
	 * thread.
	 */
	fd = open("/sys/kernel/debug/kcov", O_RDWR);
	if (fd == -1)
		perror("open"), exit(1);
	/* Set up trace mode and trace size. */
	if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
		perror("ioctl"), exit(1);
	/* Mmap buffer shared between kernel- and user-space. */
	cover = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
				     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if ((void*)cover == MAP_FAILED)
		perror("mmap"), exit(1);
	/* Enable coverage collection on the current thread. */
	if (ioctl(fd, KCOV_ENABLE, 0))
		perror("ioctl"), exit(1);
	/* Reset coverage from the tail of the ioctl() call. */
	__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
	/* That's the target syscall. */
	read(-1, NULL, 0);
	/* Read number of PCs collected. */
	n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
	for (i = 0; i < n; i++)
		printf("0x%lx\n", cover[i + 1]);
	/* Disable coverage collection for the current thread. After this call
	 * coverage can be enabled for a different thread.
	 */
	if (ioctl(fd, KCOV_DISABLE, 0))
		perror("ioctl"), exit(1);
	/* Free resources. */
	if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
		perror("munmap"), exit(1);
	if (close(fd))
		perror("close"), exit(1);
	return 0;
    }

After piping through addr2line, the output of the program looks as follows:

    SyS_read
    fs/read_write.c:562
    __fdget_pos
    fs/file.c:774
    __fget_light
    fs/file.c:746
    __fget_light
    fs/file.c:750
    __fget_light
    fs/file.c:760
    __fdget_pos
    fs/file.c:784
    SyS_read
    fs/read_write.c:562

If a program needs to collect coverage from several threads (independently),
it needs to open /sys/kernel/debug/kcov in each thread separately.

The interface is fine-grained to allow efficient forking of test processes.
That is, a parent process opens /sys/kernel/debug/kcov, enables trace mode,
mmaps the coverage buffer and then forks child processes in a loop. Child
processes only need to enable coverage (disable happens automatically on
thread end).
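
A minimal sketch of that fork pattern (illustrative only, not part of the
patch; it reuses the KCOV_* ioctls and COVER_SIZE from the example above,
and the children run one at a time so they share the one buffer safely):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    #define KCOV_INIT_TRACE	_IOR('c', 1, unsigned long)
    #define KCOV_ENABLE		_IO('c', 100)
    #define COVER_SIZE		(64<<10)

    int main(void)
    {
	unsigned long *cover;
	pid_t pid;
	int fd, i;

	/* Parent: open the file, set trace mode, mmap the buffer -- once. */
	fd = open("/sys/kernel/debug/kcov", O_RDWR);
	if (fd == -1)
		perror("open"), exit(1);
	if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
		perror("ioctl"), exit(1);
	cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
		     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (cover == MAP_FAILED)
		perror("mmap"), exit(1);

	for (i = 0; i < 4; i++) {
		pid = fork();
		if (pid == 0) {
			/* Child: only needs to enable collection. */
			if (ioctl(fd, KCOV_ENABLE, 0))
				perror("ioctl"), exit(1);
			__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
			read(-1, NULL, 0);	/* the syscall under test */
			printf("run %d: %lu PCs\n", i,
			       __atomic_load_n(&cover[0], __ATOMIC_RELAXED));
			/* Disable happens automatically when the child exits. */
			exit(0);
		}
		waitpid(pid, NULL, 0);
	}
	return 0;
    }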

Documentation/rapidio/mport_cdev.txt | 104 (new file)

@@ -0,0 +1,104 @@
RapidIO subsystem mport character device driver (rio_mport_cdev.c)
==================================================================

Version History:
----------------
  1.0.0 - Initial driver release.

==================================================================

I. Overview

This device driver is the result of collaboration within the RapidIO.org
Software Task Group (STG) between Texas Instruments, Freescale,
Prodrive Technologies, Nokia Networks, BAE and IDT. Additional input was
received from other members of RapidIO.org. The objective was to create a
character mode driver interface which exposes the capabilities of RapidIO
devices directly to applications, in a manner that allows the numerous and
varied RapidIO implementations to interoperate.

This driver (MPORT_CDEV) provides access to basic RapidIO subsystem operations
for user-space applications. Most RapidIO operations are supported through
'ioctl' system calls.

When loaded, this device driver creates filesystem nodes named rio_mportX in
the /dev directory for each registered RapidIO mport device. 'X' in the node
name matches the unique port ID assigned to each local mport device.

Using the available set of ioctl commands, user-space applications can perform
the following RapidIO bus and subsystem operations (a minimal usage sketch
follows this list):

- Reads and writes from/to configuration registers of mport devices
  (RIO_MPORT_MAINT_READ_LOCAL/RIO_MPORT_MAINT_WRITE_LOCAL)
- Reads and writes from/to configuration registers of remote RapidIO devices.
  These operations are defined as RapidIO Maintenance reads/writes in the RIO
  spec. (RIO_MPORT_MAINT_READ_REMOTE/RIO_MPORT_MAINT_WRITE_REMOTE)
- Set RapidIO Destination ID for mport devices (RIO_MPORT_MAINT_HDID_SET)
- Set RapidIO Component Tag for mport devices (RIO_MPORT_MAINT_COMPTAG_SET)
- Query the logical index of mport devices (RIO_MPORT_MAINT_PORT_IDX_GET)
- Query capabilities and RapidIO link configuration of mport devices
  (RIO_MPORT_GET_PROPERTIES)
- Enable/Disable reporting of RapidIO doorbell events to user-space
  applications (RIO_ENABLE_DOORBELL_RANGE/RIO_DISABLE_DOORBELL_RANGE)
- Enable/Disable reporting of RIO port-write events to user-space applications
  (RIO_ENABLE_PORTWRITE_RANGE/RIO_DISABLE_PORTWRITE_RANGE)
- Query/Control the type of events reported through this driver: doorbells,
  port-writes or both (RIO_SET_EVENT_MASK/RIO_GET_EVENT_MASK)
- Configure/Map mport's outbound requests window(s) for a specific size,
  RapidIO destination ID, hopcount and request type
  (RIO_MAP_OUTBOUND/RIO_UNMAP_OUTBOUND)
- Configure/Map mport's inbound requests window(s) for a specific size,
  RapidIO base address and local memory base address
  (RIO_MAP_INBOUND/RIO_UNMAP_INBOUND)
- Allocate/Free a contiguous DMA coherent memory buffer for DMA data transfers
  to/from remote RapidIO devices (RIO_ALLOC_DMA/RIO_FREE_DMA)
- Initiate DMA data transfers to/from remote RapidIO devices (RIO_TRANSFER).
  Supports blocking, asynchronous and posted (a.k.a. 'fire-and-forget') data
  transfer modes.
- Check/Wait for completion of an asynchronous DMA data transfer
  (RIO_WAIT_FOR_ASYNC)
- Manage device objects supported by the RapidIO subsystem (RIO_DEV_ADD/RIO_DEV_DEL).
  This allows implementation of various RapidIO fabric enumeration algorithms
  as user-space applications while using the remaining functionality provided
  by the kernel RapidIO subsystem.
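
The sketch below shows the basic open-and-query flow (illustrative only, not
part of the original document; the request structure comes from the UAPI
header added by this patch set, and the device index 0 is an assumption):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/rio_mport_cdev.h>

    int main(void)
    {
	struct rio_mport_properties props;
	int fd;

	/* One /dev node exists per registered local mport device. */
	fd = open("/dev/rio_mport0", O_RDWR);
	if (fd == -1)
		perror("open"), exit(1);
	/* Query capabilities and link configuration of this mport. */
	if (ioctl(fd, RIO_MPORT_GET_PROPERTIES, &props))
		perror("ioctl"), exit(1);
	printf("mport properties queried successfully\n");
	close(fd);
	return 0;
    }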

II. Hardware Compatibility

This device driver uses standard interfaces defined by the kernel RapidIO
subsystem, and therefore it can be used with any mport device driver registered
by the RapidIO subsystem, within the limitations set by the available mport
implementation.

At this moment the most common limitation is the availability of a
RapidIO-specific DMA engine framework for a specific mport device. Users
should verify the available functionality of their platform when planning to
use this driver:

- The IDT Tsi721 PCIe-to-RapidIO bridge device and its mport device driver are
  fully compatible with this driver.
- The Freescale SoC 'fsl_rio' mport driver does not implement RapidIO-specific
  DMA engine support, and therefore DMA data transfers through the mport_cdev
  driver are not available.

III. Module parameters

- 'dbg_level' - This parameter controls the amount of debug information
  generated by this device driver. It is formed by a set of bit masks that
  correspond to the specific functional blocks.
  For mask definitions see 'drivers/rapidio/devices/rio_mport_cdev.c'.
  This parameter can be changed dynamically.
  Use CONFIG_RAPIDIO_DEBUG=y to enable debug output at the top level.
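
For example, a hypothetical load with some debug bits set might look like the
following (the module name is assumed to follow the source file name, and the
mask value is illustrative; see the driver source for the real definitions):

	# modprobe rio_mport_cdev dbg_level=0xf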

IV. Known problems

  None.

V. User-space Applications and API

API library and applications that use this device driver are available from
RapidIO.org.

VI. TODO List

- Add support for sending/receiving "raw" RapidIO messaging packets.
- Add memory mapped DMA data transfers as an option when RapidIO-specific DMA
  is not available.
@@ -16,6 +16,15 @@ For inbound messages this driver uses destination ID matching to forward message
 into the corresponding message queue. Messaging callbacks are implemented to be
 fully compatible with RIONET driver (Ethernet over RapidIO messaging services).

+1. Module parameters:
+- 'dbg_level' - This parameter controls the amount of debug information
+  generated by this device driver. It is formed by a set of bit masks that
+  correspond to the specific functional blocks.
+  For mask definitions see 'drivers/rapidio/devices/tsi721.h'.
+  This parameter can be changed dynamically.
+  Use CONFIG_RAPIDIO_DEBUG=y to enable debug output at the top level.
+
 II. Known problems

   None.
Makefile | 11

@@ -365,6 +365,7 @@ LDFLAGS_MODULE  =
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage
+CFLAGS_KCOV	= -fsanitize-coverage=trace-pc


 # Use USERINCLUDE when you must reference the UAPI directories only.

@@ -411,7 +412,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS

 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN CFLAGS_UBSAN
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KCOV CFLAGS_KASAN CFLAGS_UBSAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL

@@ -673,6 +674,14 @@ endif
 endif
 KBUILD_CFLAGS += $(stackp-flag)

+ifdef CONFIG_KCOV
+  ifeq ($(call cc-option, $(CFLAGS_KCOV)),)
+    $(warning Cannot use CONFIG_KCOV: \
+             -fsanitize-coverage=trace-pc is not supported by compiler)
+    CFLAGS_KCOV =
+  endif
+endif
+
 ifeq ($(cc-name),clang)
 KBUILD_CPPFLAGS += $(call cc-option,-Qunused-arguments,)
 KBUILD_CPPFLAGS += $(call cc-option,-Wno-unknown-warning-option,)
@@ -483,7 +483,13 @@ struct exception_table_entry
 	(pc) + (_fixup)->fixup.bits.nextinsn;			\
 })

-#define ARCH_HAS_SORT_EXTABLE
-#define ARCH_HAS_SEARCH_EXTABLE
+#define ARCH_HAS_RELATIVE_EXTABLE
+
+#define swap_ex_entry_fixup(a, b, tmp, delta)			\
+	do {							\
+		(a)->fixup.unit = (b)->fixup.unit;		\
+		(b)->fixup.unit = (tmp).fixup.unit;		\
+	} while (0)

 #endif /* __ALPHA_UACCESS_H */
@@ -4,6 +4,6 @@

 ccflags-y := -Werror

-obj-y	:= init.o fault.o extable.o
+obj-y	:= init.o fault.o

 obj-$(CONFIG_DISCONTIGMEM) += numa.o
@@ -1,92 +0,0 @@
/*
 * linux/arch/alpha/mm/extable.c
 */

#include <linux/module.h>
#include <linux/sort.h>
#include <asm/uaccess.h>

static inline unsigned long ex_to_addr(const struct exception_table_entry *x)
{
	return (unsigned long)&x->insn + x->insn;
}

static void swap_ex(void *a, void *b, int size)
{
	struct exception_table_entry *ex_a = a, *ex_b = b;
	unsigned long addr_a = ex_to_addr(ex_a), addr_b = ex_to_addr(ex_b);
	unsigned int t = ex_a->fixup.unit;

	ex_a->fixup.unit = ex_b->fixup.unit;
	ex_b->fixup.unit = t;
	ex_a->insn = (int)(addr_b - (unsigned long)&ex_a->insn);
	ex_b->insn = (int)(addr_a - (unsigned long)&ex_b->insn);
}

/*
 * The exception table needs to be sorted so that the binary
 * search that we use to find entries in it works properly.
 * This is used both for the kernel exception table and for
 * the exception tables of modules that get loaded.
 */
static int cmp_ex(const void *a, const void *b)
{
	const struct exception_table_entry *x = a, *y = b;

	/* avoid overflow */
	if (ex_to_addr(x) > ex_to_addr(y))
		return 1;
	if (ex_to_addr(x) < ex_to_addr(y))
		return -1;
	return 0;
}

void sort_extable(struct exception_table_entry *start,
		  struct exception_table_entry *finish)
{
	sort(start, finish - start, sizeof(struct exception_table_entry),
	     cmp_ex, swap_ex);
}

#ifdef CONFIG_MODULES
/*
 * Any entry referring to the module init will be at the beginning or
 * the end.
 */
void trim_init_extable(struct module *m)
{
	/*trim the beginning*/
	while (m->num_exentries &&
	       within_module_init(ex_to_addr(&m->extable[0]), m)) {
		m->extable++;
		m->num_exentries--;
	}
	/*trim the end*/
	while (m->num_exentries &&
	       within_module_init(ex_to_addr(&m->extable[m->num_exentries-1]),
				  m))
		m->num_exentries--;
}
#endif /* CONFIG_MODULES */

const struct exception_table_entry *
search_extable(const struct exception_table_entry *first,
	       const struct exception_table_entry *last,
	       unsigned long value)
{
	while (first <= last) {
		const struct exception_table_entry *mid;
		unsigned long mid_value;

		mid = (last - first) / 2 + first;
		mid_value = ex_to_addr(mid);
		if (mid_value == value)
			return mid;
		else if (mid_value < value)
			first = mid+1;
		else
			last = mid-1;
	}

	return NULL;
}
@@ -341,13 +341,11 @@ extern unsigned long __strnlen_user (const char __user *, long);
 	__su_ret;						\
 })

-/* Generic code can't deal with the location-relative format that we use for compactness. */
-#define ARCH_HAS_SORT_EXTABLE
-#define ARCH_HAS_SEARCH_EXTABLE
+#define ARCH_HAS_RELATIVE_EXTABLE

 struct exception_table_entry {
-	int addr;	/* location-relative address of insn this fixup is for */
-	int cont;	/* location-relative continuation addr.; if bit 2 is set, r9 is set to 0 */
+	int insn;	/* location-relative address of insn this fixup is for */
+	int fixup;	/* location-relative continuation addr.; if bit 2 is set, r9 is set to 0 */
 };

 extern void ia64_handle_exception (struct pt_regs *regs, const struct exception_table_entry *e);
@@ -5,107 +5,12 @@
  * David Mosberger-Tang <davidm@hpl.hp.com>
  */

-#include <linux/sort.h>
-
 #include <asm/uaccess.h>
 #include <linux/module.h>

-static int cmp_ex(const void *a, const void *b)
-{
-	const struct exception_table_entry *l = a, *r = b;
-	u64 lip = (u64) &l->addr + l->addr;
-	u64 rip = (u64) &r->addr + r->addr;
-
-	/* avoid overflow */
-	if (lip > rip)
-		return 1;
-	if (lip < rip)
-		return -1;
-	return 0;
-}
-
-static void swap_ex(void *a, void *b, int size)
-{
-	struct exception_table_entry *l = a, *r = b, tmp;
-	u64 delta = (u64) r - (u64) l;
-
-	tmp = *l;
-	l->addr = r->addr + delta;
-	l->cont = r->cont + delta;
-	r->addr = tmp.addr - delta;
-	r->cont = tmp.cont - delta;
-}
-
-/*
- * Sort the exception table. It's usually already sorted, but there
- * may be unordered entries due to multiple text sections (such as the
- * .init text section). Note that the exception-table-entries contain
- * location-relative addresses, which requires a bit of care during
- * sorting to avoid overflows in the offset members (e.g., it would
- * not be safe to make a temporary copy of an exception-table entry on
- * the stack, because the stack may be more than 2GB away from the
- * exception-table).
- */
-void sort_extable (struct exception_table_entry *start,
-		   struct exception_table_entry *finish)
-{
-	sort(start, finish - start, sizeof(struct exception_table_entry),
-	     cmp_ex, swap_ex);
-}
-
-static inline unsigned long ex_to_addr(const struct exception_table_entry *x)
-{
-	return (unsigned long)&x->addr + x->addr;
-}
-
-#ifdef CONFIG_MODULES
-/*
- * Any entry referring to the module init will be at the beginning or
- * the end.
- */
-void trim_init_extable(struct module *m)
-{
-	/*trim the beginning*/
-	while (m->num_exentries &&
-	       within_module_init(ex_to_addr(&m->extable[0]), m)) {
-		m->extable++;
-		m->num_exentries--;
-	}
-	/*trim the end*/
-	while (m->num_exentries &&
-	       within_module_init(ex_to_addr(&m->extable[m->num_exentries-1]),
-				  m))
-		m->num_exentries--;
-}
-#endif /* CONFIG_MODULES */
-
-const struct exception_table_entry *
-search_extable (const struct exception_table_entry *first,
-		const struct exception_table_entry *last,
-		unsigned long ip)
-{
-	const struct exception_table_entry *mid;
-	unsigned long mid_ip;
-	long diff;
-
-	while (first <= last) {
-		mid = &first[(last - first)/2];
-		mid_ip = (u64) &mid->addr + mid->addr;
-		diff = mid_ip - ip;
-		if (diff == 0)
-			return mid;
-		else if (diff < 0)
-			first = mid + 1;
-		else
-			last = mid - 1;
-	}
-	return NULL;
-}
-
 void
 ia64_handle_exception (struct pt_regs *regs, const struct exception_table_entry *e)
 {
-	long fix = (u64) &e->cont + e->cont;
+	long fix = (u64) &e->fixup + e->fixup;

 	regs->r8 = -EFAULT;
 	if (fix & 4)
@@ -606,6 +606,12 @@ int fsl_rio_setup(struct platform_device *dev)
 		if (!port)
 			continue;

+		rc = rio_mport_initialize(port);
+		if (rc) {
+			kfree(port);
+			continue;
+		}
+
 		i = *port_index - 1;
 		port->index = (unsigned char)i;

@@ -682,12 +688,6 @@ int fsl_rio_setup(struct platform_device *dev)
 		dev_info(&dev->dev, "RapidIO Common Transport System size: %d\n",
 				port->sys_size ? 65536 : 256);

-		if (rio_register_mport(port)) {
-			release_resource(&port->iores);
-			kfree(priv);
-			kfree(port);
-			continue;
-		}
 		if (port->host_deviceid >= 0)
 			out_be32(priv->regs_win + RIO_GCCSR, RIO_PORT_GEN_HOST |
 				RIO_PORT_GEN_MASTER | RIO_PORT_GEN_DISCOVERED);
@@ -726,7 +726,14 @@ int fsl_rio_setup(struct platform_device *dev)
 		fsl_rio_inbound_mem_init(priv);

 		dbell->mport[i] = port;
+		pw->mport[i] = port;

+		if (rio_register_mport(port)) {
+			release_resource(&port->iores);
+			kfree(priv);
+			kfree(port);
+			continue;
+		}
 		active_ports++;
 	}
@@ -97,6 +97,7 @@ struct fsl_rio_dbell {
 };

 struct fsl_rio_pw {
+	struct rio_mport *mport[MAX_PORT_NUM];
 	struct device *dev;
 	struct rio_pw_regs __iomem *pw_regs;
 	struct rio_port_write_msg port_write_msg;
@@ -481,14 +481,14 @@ fsl_rio_port_write_handler(int irq, void *dev_instance)
 static void fsl_pw_dpc(struct work_struct *work)
 {
 	struct fsl_rio_pw *pw = container_of(work, struct fsl_rio_pw, pw_work);
-	u32 msg_buffer[RIO_PW_MSG_SIZE/sizeof(u32)];
+	union rio_pw_msg msg_buffer;
+	int i;

 	/*
 	 * Process port-write messages
 	 */
-	while (kfifo_out_spinlocked(&pw->pw_fifo, (unsigned char *)msg_buffer,
+	while (kfifo_out_spinlocked(&pw->pw_fifo, (unsigned char *)&msg_buffer,
 			 RIO_PW_MSG_SIZE, &pw->pw_fifo_lock)) {
 		/* Process one message */
 #ifdef DEBUG_PW
 		{
 			u32 i;
@@ -496,15 +496,19 @@ static void fsl_pw_dpc(struct work_struct *work)
 			for (i = 0; i < RIO_PW_MSG_SIZE/sizeof(u32); i++) {
 				if ((i%4) == 0)
 					pr_debug("\n0x%02x: 0x%08x", i*4,
-						 msg_buffer[i]);
+						 msg_buffer.raw[i]);
 				else
-					pr_debug(" 0x%08x", msg_buffer[i]);
+					pr_debug(" 0x%08x", msg_buffer.raw[i]);
 			}
 			pr_debug("\n");
 		}
 #endif
 		/* Pass the port-write message to RIO core for processing */
-		rio_inb_pwrite_handler((union rio_pw_msg *)msg_buffer);
+		for (i = 0; i < MAX_PORT_NUM; i++) {
+			if (pw->mport[i])
+				rio_inb_pwrite_handler(pw->mport[i],
+						       &msg_buffer);
+		}
 	}
 }
@@ -79,18 +79,12 @@ struct exception_table_entry
 	int insn, fixup;
 };

-static inline unsigned long extable_insn(const struct exception_table_entry *x)
-{
-	return (unsigned long)&x->insn + x->insn;
-}
-
 static inline unsigned long extable_fixup(const struct exception_table_entry *x)
 {
 	return (unsigned long)&x->fixup + x->fixup;
 }

-#define ARCH_HAS_SORT_EXTABLE
-#define ARCH_HAS_SEARCH_EXTABLE
+#define ARCH_HAS_RELATIVE_EXTABLE

 /**
  * __copy_from_user: - Copy a block of data from user space, with less checking.
@@ -3,7 +3,7 @@
 #

 obj-y		:= init.o fault.o extmem.o mmap.o vmem.o maccess.o
-obj-y		+= page-states.o gup.o extable.o pageattr.o mem_detect.o
+obj-y		+= page-states.o gup.o pageattr.o mem_detect.o
 obj-y		+= pgtable.o pgalloc.o

 obj-$(CONFIG_CMM)	+= cmm.o
@@ -1,85 +0,0 @@
#include <linux/module.h>
#include <linux/sort.h>
#include <asm/uaccess.h>

/*
 * Search one exception table for an entry corresponding to the
 * given instruction address, and return the address of the entry,
 * or NULL if none is found.
 * We use a binary search, and thus we assume that the table is
 * already sorted.
 */
const struct exception_table_entry *
search_extable(const struct exception_table_entry *first,
	       const struct exception_table_entry *last,
	       unsigned long value)
{
	const struct exception_table_entry *mid;
	unsigned long addr;

	while (first <= last) {
		mid = ((last - first) >> 1) + first;
		addr = extable_insn(mid);
		if (addr < value)
			first = mid + 1;
		else if (addr > value)
			last = mid - 1;
		else
			return mid;
	}
	return NULL;
}

/*
 * The exception table needs to be sorted so that the binary
 * search that we use to find entries in it works properly.
 * This is used both for the kernel exception table and for
 * the exception tables of modules that get loaded.
 *
 */
static int cmp_ex(const void *a, const void *b)
{
	const struct exception_table_entry *x = a, *y = b;

	/* This compare is only valid after normalization. */
	return x->insn - y->insn;
}

void sort_extable(struct exception_table_entry *start,
		  struct exception_table_entry *finish)
{
	struct exception_table_entry *p;
	int i;

	/* Normalize entries to being relative to the start of the section */
	for (p = start, i = 0; p < finish; p++, i += 8) {
		p->insn += i;
		p->fixup += i + 4;
	}
	sort(start, finish - start, sizeof(*start), cmp_ex, NULL);
	/* Denormalize all entries */
	for (p = start, i = 0; p < finish; p++, i += 8) {
		p->insn -= i;
		p->fixup -= i + 4;
	}
}

#ifdef CONFIG_MODULES
/*
 * If the exception table is sorted, any referring to the module init
 * will be at the beginning or the end.
 */
void trim_init_extable(struct module *m)
{
	/* Trim the beginning */
	while (m->num_exentries &&
	       within_module_init(extable_insn(&m->extable[0]), m)) {
		m->extable++;
		m->num_exentries--;
	}
	/* Trim the end */
	while (m->num_exentries &&
	       within_module_init(extable_insn(&m->extable[m->num_exentries-1]), m))
		m->num_exentries--;
}
#endif /* CONFIG_MODULES */
@@ -307,4 +307,11 @@ static inline int is_compat_task(void)
 	return test_thread_flag(TIF_32BIT);
 }

+static inline bool in_compat_syscall(void)
+{
+	/* Vector 0x110 is LINUX_32BIT_SYSCALL_TRAP */
+	return pt_regs_trap_type(current_pt_regs()) == 0x110;
+}
+#define in_compat_syscall in_compat_syscall
+
 #endif /* _ASM_SPARC64_COMPAT_H */
@@ -3,6 +3,7 @@

 #include <uapi/linux/audit.h>
 #include <linux/kernel.h>
+#include <linux/compat.h>
 #include <linux/sched.h>
 #include <asm/ptrace.h>
 #include <asm/thread_info.h>
@@ -128,7 +129,13 @@ static inline void syscall_set_arguments(struct task_struct *task,

 static inline int syscall_get_arch(void)
 {
-	return is_32bit_task() ? AUDIT_ARCH_SPARC : AUDIT_ARCH_SPARC64;
+#if defined(CONFIG_SPARC64) && defined(CONFIG_COMPAT)
+	return in_compat_syscall() ? AUDIT_ARCH_SPARC : AUDIT_ARCH_SPARC64;
+#elif defined(CONFIG_SPARC64)
+	return AUDIT_ARCH_SPARC64;
+#else
+	return AUDIT_ARCH_SPARC;
+#endif
 }

 #endif /* __ASM_SPARC_SYSCALL_H */
@@ -133,7 +133,7 @@ void mconsole_proc(struct mc_request *req)
 	ptr += strlen("proc");
 	ptr = skip_spaces(ptr);

-	file = file_open_root(mnt->mnt_root, mnt, ptr, O_RDONLY);
+	file = file_open_root(mnt->mnt_root, mnt, ptr, O_RDONLY, 0);
 	if (IS_ERR(file)) {
 		mconsole_reply(req, "Failed to open file", 1, 0);
 		printk(KERN_ERR "open /proc/%s: %ld\n", ptr, PTR_ERR(file));
@@ -28,6 +28,7 @@ config X86
 	select ARCH_HAS_ELF_RANDOMIZE
 	select ARCH_HAS_FAST_MULTIPLIER
 	select ARCH_HAS_GCOV_PROFILE_ALL
+	select ARCH_HAS_KCOV			if X86_64
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_MMIO_FLUSH
 	select ARCH_HAS_SG_CHAIN
@@ -12,6 +12,13 @@
 KASAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

+# Kernel does not boot with kcov instrumentation here.
+# One of the problems observed was insertion of __sanitizer_cov_trace_pc()
+# callback into middle of per-cpu data enabling code. Thus the callback observed
+# inconsistent state and crashed. We are interested mostly in syscall coverage,
+# so boot code is not interesting anyway.
+KCOV_INSTRUMENT			:= n
+
 # If you want to preset the SVGA mode, uncomment the next line and
 # set SVGA_MODE to whatever number you want.
 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode.
@@ -19,6 +19,9 @@
 KASAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

+# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
+KCOV_INSTRUMENT			:= n
+
 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
 	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
@@ -7,6 +7,9 @@ KASAN_SANITIZE := n
 UBSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

+# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
+KCOV_INSTRUMENT			:= n
+
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
 VDSO32-$(CONFIG_X86_32)		:= y
@@ -316,9 +316,10 @@ static inline bool is_x32_task(void)
 	return false;
 }

-static inline bool is_compat_task(void)
+static inline bool in_compat_syscall(void)
 {
 	return is_ia32_task() || is_x32_task();
 }
+#define in_compat_syscall in_compat_syscall	/* override the generic impl */

 #endif /* _ASM_X86_COMPAT_H */
@@ -58,7 +58,7 @@ int ftrace_int3_handler(struct pt_regs *regs);
 #define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS 1
 static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
 {
-	if (is_compat_task())
+	if (in_compat_syscall())
 		return true;
 	return false;
 }
@@ -105,9 +105,8 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
 struct exception_table_entry {
 	int insn, fixup, handler;
 };
+/* This is not the generic standard exception_table_entry format */
-#define ARCH_HAS_SORT_EXTABLE
-#define ARCH_HAS_SEARCH_EXTABLE
+#define ARCH_HAS_RELATIVE_EXTABLE

 extern int fixup_exception(struct pt_regs *regs, int trapnr);
 extern bool ex_has_fault_handler(unsigned long ip);
@@ -25,6 +25,12 @@ OBJECT_FILES_NON_STANDARD_relocate_kernel_$(BITS).o := y
 OBJECT_FILES_NON_STANDARD_mcount_$(BITS).o	:= y
 OBJECT_FILES_NON_STANDARD_test_nx.o		:= y

+# If instrumentation of this dir is enabled, boot hangs during first second.
+# Probably could be more selective here, but note that files related to irqs,
+# boot, dumpstack/stacktrace, etc are either non-interesting or can lead to
+# non-deterministic coverage.
+KCOV_INSTRUMENT		:= n
+
 CFLAGS_irq.o := -I$(src)/../include/asm/trace

 obj-y			:= process_$(BITS).o signal.o
@@ -2,6 +2,10 @@
 # Makefile for local APIC drivers and for the IO-APIC code
 #

+# Leads to non-deterministic coverage that is not a function of syscall inputs.
+# In particular, smp_apic_timer_interrupt() is called in random places.
+KCOV_INSTRUMENT		:= n
+
 obj-$(CONFIG_X86_LOCAL_APIC)	+= apic.o apic_noop.o ipi.o vector.o
 obj-y				+= hw_nmi.o
@@ -8,6 +8,10 @@ CFLAGS_REMOVE_common.o = -pg
 CFLAGS_REMOVE_perf_event.o = -pg
 endif

+# If these files are instrumented, boot hangs during the first second.
+KCOV_INSTRUMENT_common.o := n
+KCOV_INSTRUMENT_perf_event.o := n
+
 # Make sure load_percpu_segment has no stackprotector
 nostackp := $(call cc-option, -fno-stack-protector)
 CFLAGS_common.o		:= $(nostackp)
@@ -478,7 +478,7 @@ void set_personality_ia32(bool x32)
 		if (current->mm)
 			current->mm->context.ia32_compat = TIF_X32;
 		current->personality &= ~READ_IMPLIES_EXEC;
-		/* is_compat_task() uses the presence of the x32
+		/* in_compat_syscall() uses the presence of the x32
 		   syscall bit flag to determine compat status */
 		current_thread_info()->status &= ~TS_COMPAT;
 	} else {
@@ -2,6 +2,9 @@
 # Makefile for x86 specific library files.
 #

+# Produces uninteresting flaky coverage.
+KCOV_INSTRUMENT_delay.o	:= n
+
 inat_tables_script = $(srctree)/arch/x86/tools/gen-insn-attr-x86.awk
 inat_tables_maps = $(srctree)/arch/x86/lib/x86-opcode-map.txt
 quiet_cmd_inat_tables = GEN $@
@@ -1,3 +1,6 @@
+# Kernel does not boot with instrumentation of tlb.c.
+KCOV_INSTRUMENT_tlb.o	:= n
+
 obj-y	:= init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
 	    pat.o pgtable.o physaddr.o gup.o setup_nx.o
@@ -1,16 +1,9 @@
 #include <linux/module.h>
-#include <linux/spinlock.h>
-#include <linux/sort.h>
 #include <asm/uaccess.h>

 typedef bool (*ex_handler_t)(const struct exception_table_entry *,
			     struct pt_regs *, int);

-static inline unsigned long
-ex_insn_addr(const struct exception_table_entry *x)
-{
-	return (unsigned long)&x->insn + x->insn;
-}
 static inline unsigned long
 ex_fixup_addr(const struct exception_table_entry *x)
 {
@@ -110,104 +103,3 @@ int __init early_fixup_exception(unsigned long *ip)
 	*ip = new_ip;
 	return 1;
 }
-
-/*
- * Search one exception table for an entry corresponding to the
- * given instruction address, and return the address of the entry,
- * or NULL if none is found.
- * We use a binary search, and thus we assume that the table is
- * already sorted.
- */
-const struct exception_table_entry *
-search_extable(const struct exception_table_entry *first,
-	       const struct exception_table_entry *last,
-	       unsigned long value)
-{
-	while (first <= last) {
-		const struct exception_table_entry *mid;
-		unsigned long addr;
-
-		mid = ((last - first) >> 1) + first;
-		addr = ex_insn_addr(mid);
-		if (addr < value)
-			first = mid + 1;
-		else if (addr > value)
-			last = mid - 1;
-		else
-			return mid;
-	}
-	return NULL;
-}
-
-/*
- * The exception table needs to be sorted so that the binary
- * search that we use to find entries in it works properly.
- * This is used both for the kernel exception table and for
- * the exception tables of modules that get loaded.
- *
- */
-static int cmp_ex(const void *a, const void *b)
-{
-	const struct exception_table_entry *x = a, *y = b;
-
-	/*
-	 * This value will always end up fitting in an int, because on
-	 * both i386 and x86-64 the kernel symbol-reachable address
-	 * space is < 2 GiB.
-	 *
-	 * This compare is only valid after normalization.
-	 */
-	return x->insn - y->insn;
-}
-
-void sort_extable(struct exception_table_entry *start,
-		  struct exception_table_entry *finish)
-{
-	struct exception_table_entry *p;
-	int i;
-
-	/* Convert all entries to being relative to the start of the section */
-	i = 0;
-	for (p = start; p < finish; p++) {
-		p->insn += i;
-		i += 4;
-		p->fixup += i;
-		i += 4;
-		p->handler += i;
-		i += 4;
-	}
-
-	sort(start, finish - start, sizeof(struct exception_table_entry),
-	     cmp_ex, NULL);
-
-	/* Denormalize all entries */
-	i = 0;
-	for (p = start; p < finish; p++) {
-		p->insn -= i;
-		i += 4;
-		p->fixup -= i;
-		i += 4;
-		p->handler -= i;
-		i += 4;
-	}
-}
-
-#ifdef CONFIG_MODULES
-/*
- * If the exception table is sorted, any referring to the module init
- * will be at the beginning or the end.
- */
-void trim_init_extable(struct module *m)
-{
-	/*trim the beginning*/
-	while (m->num_exentries &&
-	       within_module_init(ex_insn_addr(&m->extable[0]), m)) {
-		m->extable++;
-		m->num_exentries--;
-	}
-	/*trim the end*/
-	while (m->num_exentries &&
-	       within_module_init(ex_insn_addr(&m->extable[m->num_exentries-1]), m))
-		m->num_exentries--;
-}
-#endif /* CONFIG_MODULES */
@@ -9,6 +9,9 @@
 KASAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

+# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
+KCOV_INSTRUMENT			:= n
+
 always := realmode.bin realmode.relocs

 wakeup-objs	:= wakeup_asm.o wakemain.o video-mode.o
@@ -2,6 +2,7 @@
 * Coherent per-device memory handling.
 * Borrowed from i386
 */
+#include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -31,7 +32,10 @@ static bool dma_init_coherent_memory(
 	if (!size)
 		goto out;

-	mem_base = ioremap(phys_addr, size);
+	if (flags & DMA_MEMORY_MAP)
+		mem_base = memremap(phys_addr, size, MEMREMAP_WC);
+	else
+		mem_base = ioremap(phys_addr, size);
 	if (!mem_base)
 		goto out;

@@ -54,8 +58,12 @@ static bool dma_init_coherent_memory(

 out:
 	kfree(dma_mem);
-	if (mem_base)
-		iounmap(mem_base);
+	if (mem_base) {
+		if (flags & DMA_MEMORY_MAP)
+			memunmap(mem_base);
+		else
+			iounmap(mem_base);
+	}
 	return false;
 }

@@ -63,7 +71,11 @@ static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
 {
 	if (!mem)
 		return;
-	iounmap(mem->virt_base);
+
+	if (mem->flags & DMA_MEMORY_MAP)
+		memunmap(mem->virt_base);
+	else
+		iounmap(mem->virt_base);
 	kfree(mem->bitmap);
 	kfree(mem);
 }
@@ -175,7 +187,10 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 	 */
 	*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
 	*ret = mem->virt_base + (pageno << PAGE_SHIFT);
-	memset(*ret, 0, size);
+	if (mem->flags & DMA_MEMORY_MAP)
+		memset(*ret, 0, size);
+	else
+		memset_io(*ret, 0, size);
 	spin_unlock_irqrestore(&mem->spinlock, flags);

 	return 1;
@@ -1140,7 +1140,7 @@ ipmi_nmi(unsigned int val, struct pt_regs *regs)
 		   the timer.  So do so. */
 		pretimeout_since_last_heartbeat = 1;
 		if (atomic_inc_and_test(&preop_panic_excl))
-			panic(PFX "pre-timeout");
+			nmi_panic(regs, PFX "pre-timeout");
 	}

 	return NMI_HANDLED;
@@ -221,7 +221,7 @@ struct inbound_phy_packet_event {
 #ifdef CONFIG_COMPAT
 static void __user *u64_to_uptr(u64 value)
 {
-	if (is_compat_task())
+	if (in_compat_syscall())
 		return compat_ptr(value);
 	else
 		return (void __user *)(unsigned long)value;
@@ -229,7 +229,7 @@ static void __user *u64_to_uptr(u64 value)

 static u64 uptr_to_u64(void __user *ptr)
 {
-	if (is_compat_task())
+	if (in_compat_syscall())
 		return ptr_to_compat(ptr);
 	else
 		return (u64)(unsigned long)ptr;
@@ -231,7 +231,7 @@ sanity_check(struct efi_variable *var, efi_char16_t *name, efi_guid_t vendor,

 static inline bool is_compat(void)
 {
-	if (IS_ENABLED(CONFIG_COMPAT) && is_compat_task())
+	if (IS_ENABLED(CONFIG_COMPAT) && in_compat_syscall())
 		return true;

 	return false;
@@ -25,6 +25,9 @@ KASAN_SANITIZE := n
 UBSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

+# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
+KCOV_INSTRUMENT			:= n
+
 lib-y				:= efi-stub-helper.o

 # include the stub's generic dependencies from lib/ when building for ARM/arm64
@@ -107,7 +107,7 @@ static int kfd_open(struct inode *inode, struct file *filep)
 	if (iminor(inode) != 0)
 		return -ENODEV;

-	is_32bit_user_mode = is_compat_task();
+	is_32bit_user_mode = in_compat_syscall();

 	if (is_32bit_user_mode == true) {
 		dev_warn(kfd_device,
@@ -311,7 +311,7 @@ static struct kfd_process *create_process(const struct task_struct *thread)
 		goto err_process_pqm_init;

 	/* init process apertures*/
-	process->is_32bit_user_mode = is_compat_task();
+	process->is_32bit_user_mode = in_compat_syscall();
 	if (kfd_init_apertures(process) != 0)
 		goto err_init_apretures;
@@ -384,7 +384,7 @@ struct uhid_create_req_compat {
 static int uhid_event_from_user(const char __user *buffer, size_t len,
				struct uhid_event *event)
 {
-	if (is_compat_task()) {
+	if (in_compat_syscall()) {
 		u32 type;

 		if (get_user(type, buffer))
@@ -17,17 +17,7 @@

 #ifdef CONFIG_COMPAT

-/* Note to the author of this code: did it ever occur to
-   you why the ifdefs are needed? Think about it again. -AK */
-#if defined(CONFIG_X86_64) || defined(CONFIG_TILE)
-# define INPUT_COMPAT_TEST is_compat_task()
-#elif defined(CONFIG_S390)
-# define INPUT_COMPAT_TEST test_thread_flag(TIF_31BIT)
-#elif defined(CONFIG_MIPS)
-# define INPUT_COMPAT_TEST test_thread_flag(TIF_32BIT_ADDR)
-#else
-# define INPUT_COMPAT_TEST test_thread_flag(TIF_32BIT)
-#endif
+#define INPUT_COMPAT_TEST in_compat_syscall()

 struct input_event_compat {
 	struct compat_timeval time;
@@ -24,6 +24,7 @@
 #include <linux/skbuff.h>
 #include <linux/crc32.h>
 #include <linux/ethtool.h>
+#include <linux/reboot.h>

 #define DRV_NAME        "rionet"
 #define DRV_VERSION     "0.3"
@@ -48,6 +49,8 @@ MODULE_LICENSE("GPL");
 #define RIONET_TX_RING_SIZE	CONFIG_RIONET_TX_SIZE
 #define RIONET_RX_RING_SIZE	CONFIG_RIONET_RX_SIZE
 #define RIONET_MAX_NETS		8
+#define RIONET_MSG_SIZE		RIO_MAX_MSG_SIZE
+#define RIONET_MAX_MTU		(RIONET_MSG_SIZE - ETH_HLEN)

 struct rionet_private {
 	struct rio_mport *mport;
@@ -60,6 +63,7 @@ struct rionet_private {
 	spinlock_t lock;
 	spinlock_t tx_lock;
 	u32 msg_enable;
+	bool open;
 };

 struct rionet_peer {
@@ -71,6 +75,7 @@ struct rionet_peer {
 struct rionet_net {
 	struct net_device *ndev;
 	struct list_head peers;
+	spinlock_t lock;	/* net info access lock */
 	struct rio_dev **active;
 	int nact;	/* number of active peers */
 };
@@ -232,26 +237,32 @@ static void rionet_dbell_event(struct rio_mport *mport, void *dev_id, u16 sid, u
 	struct net_device *ndev = dev_id;
 	struct rionet_private *rnet = netdev_priv(ndev);
 	struct rionet_peer *peer;
+	unsigned char netid = rnet->mport->id;

 	if (netif_msg_intr(rnet))
 		printk(KERN_INFO "%s: doorbell sid %4.4x tid %4.4x info %4.4x",
 		       DRV_NAME, sid, tid, info);
 	if (info == RIONET_DOORBELL_JOIN) {
-		if (!nets[rnet->mport->id].active[sid]) {
-			list_for_each_entry(peer,
-					   &nets[rnet->mport->id].peers, node) {
+		if (!nets[netid].active[sid]) {
+			spin_lock(&nets[netid].lock);
+			list_for_each_entry(peer, &nets[netid].peers, node) {
 				if (peer->rdev->destid == sid) {
-					nets[rnet->mport->id].active[sid] =
-								peer->rdev;
-					nets[rnet->mport->id].nact++;
+					nets[netid].active[sid] = peer->rdev;
+					nets[netid].nact++;
 				}
 			}
+			spin_unlock(&nets[netid].lock);
+
 			rio_mport_send_doorbell(mport, sid,
 						RIONET_DOORBELL_JOIN);
 		}
 	} else if (info == RIONET_DOORBELL_LEAVE) {
-		nets[rnet->mport->id].active[sid] = NULL;
-		nets[rnet->mport->id].nact--;
+		spin_lock(&nets[netid].lock);
+		if (nets[netid].active[sid]) {
+			nets[netid].active[sid] = NULL;
+			nets[netid].nact--;
+		}
+		spin_unlock(&nets[netid].lock);
 	} else {
 		if (netif_msg_intr(rnet))
 			printk(KERN_WARNING "%s: unhandled doorbell\n",
@@ -280,7 +291,7 @@ static void rionet_outb_msg_event(struct rio_mport *mport, void *dev_id, int mbo
 	struct net_device *ndev = dev_id;
 	struct rionet_private *rnet = netdev_priv(ndev);

-	spin_lock(&rnet->lock);
+	spin_lock(&rnet->tx_lock);

 	if (netif_msg_intr(rnet))
 		printk(KERN_INFO
@@ -299,14 +310,16 @@ static void rionet_outb_msg_event(struct rio_mport *mport, void *dev_id, int mbo
 	if (rnet->tx_cnt < RIONET_TX_RING_SIZE)
 		netif_wake_queue(ndev);

-	spin_unlock(&rnet->lock);
+	spin_unlock(&rnet->tx_lock);
 }

 static int rionet_open(struct net_device *ndev)
 {
 	int i, rc = 0;
-	struct rionet_peer *peer, *tmp;
+	struct rionet_peer *peer;
 	struct rionet_private *rnet = netdev_priv(ndev);
+	unsigned char netid = rnet->mport->id;
+	unsigned long flags;

 	if (netif_msg_ifup(rnet))
 		printk(KERN_INFO "%s: open\n", DRV_NAME);
@@ -345,20 +358,13 @@ static int rionet_open(struct net_device *ndev)
 	netif_carrier_on(ndev);
 	netif_start_queue(ndev);

-	list_for_each_entry_safe(peer, tmp,
-				 &nets[rnet->mport->id].peers, node) {
-		if (!(peer->res = rio_request_outb_dbell(peer->rdev,
-							 RIONET_DOORBELL_JOIN,
-							 RIONET_DOORBELL_LEAVE)))
-		{
-			printk(KERN_ERR "%s: error requesting doorbells\n",
-			       DRV_NAME);
-			continue;
-		}
-
+	spin_lock_irqsave(&nets[netid].lock, flags);
+	list_for_each_entry(peer, &nets[netid].peers, node) {
 		/* Send a join message */
 		rio_send_doorbell(peer->rdev, RIONET_DOORBELL_JOIN);
 	}
+	spin_unlock_irqrestore(&nets[netid].lock, flags);
+	rnet->open = true;

 out:
 	return rc;
@@ -367,7 +373,9 @@ static int rionet_open(struct net_device *ndev)
 static int rionet_close(struct net_device *ndev)
 {
 	struct rionet_private *rnet = netdev_priv(ndev);
-	struct rionet_peer *peer, *tmp;
+	struct rionet_peer *peer;
+	unsigned char netid = rnet->mport->id;
+	unsigned long flags;
 	int i;

 	if (netif_msg_ifup(rnet))
@@ -375,18 +383,21 @@ static int rionet_close(struct net_device *ndev)

 	netif_stop_queue(ndev);
 	netif_carrier_off(ndev);
+	rnet->open = false;

 	for (i = 0; i < RIONET_RX_RING_SIZE; i++)
 		kfree_skb(rnet->rx_skb[i]);

-	list_for_each_entry_safe(peer, tmp,
-				 &nets[rnet->mport->id].peers, node) {
-		if (nets[rnet->mport->id].active[peer->rdev->destid]) {
+	spin_lock_irqsave(&nets[netid].lock, flags);
+	list_for_each_entry(peer, &nets[netid].peers, node) {
+		if (nets[netid].active[peer->rdev->destid]) {
 			rio_send_doorbell(peer->rdev, RIONET_DOORBELL_LEAVE);
-			nets[rnet->mport->id].active[peer->rdev->destid] = NULL;
+			nets[netid].active[peer->rdev->destid] = NULL;
 		}
-		rio_release_outb_dbell(peer->rdev, peer->res);
+		if (peer->res)
+			rio_release_outb_dbell(peer->rdev, peer->res);
 	}
+	spin_unlock_irqrestore(&nets[netid].lock, flags);

 	rio_release_inb_dbell(rnet->mport, RIONET_DOORBELL_JOIN,
 			      RIONET_DOORBELL_LEAVE);
@@ -400,22 +411,38 @@ static void rionet_remove_dev(struct device *dev, struct subsys_interface *sif)
 {
 	struct rio_dev *rdev = to_rio_dev(dev);
 	unsigned char netid = rdev->net->hport->id;
-	struct rionet_peer *peer, *tmp;
+	struct rionet_peer *peer;
+	int state, found = 0;
+	unsigned long flags;

-	if (dev_rionet_capable(rdev)) {
-		list_for_each_entry_safe(peer, tmp, &nets[netid].peers, node) {
-			if (peer->rdev == rdev) {
-				if (nets[netid].active[rdev->destid]) {
-					nets[netid].active[rdev->destid] = NULL;
-					nets[netid].nact--;
+	if (!dev_rionet_capable(rdev))
+		return;
+
+	spin_lock_irqsave(&nets[netid].lock, flags);
+	list_for_each_entry(peer, &nets[netid].peers, node) {
+		if (peer->rdev == rdev) {
+			list_del(&peer->node);
+			if (nets[netid].active[rdev->destid]) {
+				state = atomic_read(&rdev->state);
+				if (state != RIO_DEVICE_GONE &&
+				    state != RIO_DEVICE_INITIALIZING) {
+					rio_send_doorbell(rdev,
+							RIONET_DOORBELL_LEAVE);
 				}
-
-				list_del(&peer->node);
-				kfree(peer);
-				break;
+				nets[netid].active[rdev->destid] = NULL;
+				nets[netid].nact--;
 			}
+			found = 1;
+			break;
 		}
 	}
+	spin_unlock_irqrestore(&nets[netid].lock, flags);
+
+	if (found) {
+		if (peer->res)
+			rio_release_outb_dbell(rdev, peer->res);
+		kfree(peer);
+	}
 }

 static void rionet_get_drvinfo(struct net_device *ndev,
@@ -443,6 +470,17 @@ static void rionet_set_msglevel(struct net_device *ndev, u32 value)
 	rnet->msg_enable = value;
 }

+static int rionet_change_mtu(struct net_device *ndev, int new_mtu)
+{
+	if ((new_mtu < 68) || (new_mtu > RIONET_MAX_MTU)) {
+		printk(KERN_ERR "%s: Invalid MTU size %d\n",
+		       ndev->name, new_mtu);
+		return -EINVAL;
+	}
+	ndev->mtu = new_mtu;
+	return 0;
+}
+
 static const struct ethtool_ops rionet_ethtool_ops = {
 	.get_drvinfo = rionet_get_drvinfo,
 	.get_msglevel = rionet_get_msglevel,
@@ -454,7 +492,7 @@ static const struct net_device_ops rionet_netdev_ops = {
 	.ndo_open		= rionet_open,
 	.ndo_stop		= rionet_close,
 	.ndo_start_xmit		= rionet_start_xmit,
-	.ndo_change_mtu		= eth_change_mtu,
+	.ndo_change_mtu		= rionet_change_mtu,
 	.ndo_validate_addr	= eth_validate_addr,
 	.ndo_set_mac_address	= eth_mac_addr,
 };
@@ -478,6 +516,7 @@ static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev)
 	/* Set up private area */
 	rnet = netdev_priv(ndev);
 	rnet->mport = mport;
+	rnet->open = false;

 	/* Set the default MAC address */
 	device_id = rio_local_get_device_id(mport);
@@ -489,7 +528,7 @@ static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev)
 	ndev->dev_addr[5] = device_id & 0xff;

 	ndev->netdev_ops = &rionet_netdev_ops;
-	ndev->mtu = RIO_MAX_MSG_SIZE - 14;
+	ndev->mtu = RIONET_MAX_MTU;
 	ndev->features = NETIF_F_LLTX;
 	SET_NETDEV_DEV(ndev, &mport->dev);
 	ndev->ethtool_ops = &rionet_ethtool_ops;
@@ -500,8 +539,11 @@ static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev)
 	rnet->msg_enable = RIONET_DEFAULT_MSGLEVEL;

 	rc = register_netdev(ndev);
-	if (rc != 0)
+	if (rc != 0) {
+		free_pages((unsigned long)nets[mport->id].active,
+			   get_order(rionet_active_bytes));
 		goto out;
+	}

 	printk(KERN_INFO "%s: %s %s Version %s, MAC %pM, %s\n",
 	       ndev->name,
@@ -515,8 +557,6 @@ static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev)
 	return rc;
 }

-static unsigned long net_table[RIONET_MAX_NETS/sizeof(unsigned long) + 1];
-
 static int rionet_add_dev(struct device *dev, struct subsys_interface *sif)
 {
 	int rc = -ENODEV;
@@ -525,19 +565,16 @@ static int rionet_add_dev(struct device *dev, struct subsys_interface *sif)
 	struct net_device *ndev = NULL;
 	struct rio_dev *rdev = to_rio_dev(dev);
 	unsigned char netid = rdev->net->hport->id;
-	int oldnet;

 	if (netid >= RIONET_MAX_NETS)
 		return rc;

-	oldnet = test_and_set_bit(netid, net_table);
-
 	/*
 	 * If first time through this net, make sure local device is rionet
 	 * capable and setup netdev (this step will be skipped in later probes
 	 * on the same net).
 	 */
-	if (!oldnet) {
+	if (!nets[netid].ndev) {
 		rio_local_read_config_32(rdev->net->hport, RIO_SRC_OPS_CAR,
 					 &lsrc_ops);
 		rio_local_read_config_32(rdev->net->hport, RIO_DST_OPS_CAR,
@@ -555,30 +592,56 @@ static int rionet_add_dev(struct device *dev, struct subsys_interface *sif)
 			rc = -ENOMEM;
 			goto out;
 		}
+		nets[netid].ndev = ndev;

 		rc = rionet_setup_netdev(rdev->net->hport, ndev);
 		if (rc) {
 			printk(KERN_ERR "%s: failed to setup netdev (rc=%d)\n",
 			       DRV_NAME, rc);
 			free_netdev(ndev)
|
||||
goto out;
|
||||
}
|
||||
|
||||
INIT_LIST_HEAD(&nets[netid].peers);
|
||||
spin_lock_init(&nets[netid].lock);
|
||||
nets[netid].nact = 0;
|
||||
} else if (nets[netid].ndev == NULL)
|
||||
goto out;
|
||||
nets[netid].ndev = ndev;
|
||||
}
|
||||
|
||||
/*
|
||||
* If the remote device has mailbox/doorbell capabilities,
|
||||
* add it to the peer list.
|
||||
*/
|
||||
if (dev_rionet_capable(rdev)) {
|
||||
if (!(peer = kmalloc(sizeof(struct rionet_peer), GFP_KERNEL))) {
|
||||
struct rionet_private *rnet;
|
||||
unsigned long flags;
|
||||
|
||||
rnet = netdev_priv(nets[netid].ndev);
|
||||
|
||||
peer = kzalloc(sizeof(*peer), GFP_KERNEL);
|
||||
if (!peer) {
|
||||
rc = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
peer->rdev = rdev;
|
||||
peer->res = rio_request_outb_dbell(peer->rdev,
|
||||
RIONET_DOORBELL_JOIN,
|
||||
RIONET_DOORBELL_LEAVE);
|
||||
if (!peer->res) {
|
||||
pr_err("%s: error requesting doorbells\n", DRV_NAME);
|
||||
kfree(peer);
|
||||
rc = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
spin_lock_irqsave(&nets[netid].lock, flags);
|
||||
list_add_tail(&peer->node, &nets[netid].peers);
|
||||
spin_unlock_irqrestore(&nets[netid].lock, flags);
|
||||
pr_debug("%s: %s add peer %s\n",
|
||||
DRV_NAME, __func__, rio_name(rdev));
|
||||
|
||||
/* If netdev is already opened, send join request to new peer */
|
||||
if (rnet->open)
|
||||
rio_send_doorbell(peer->rdev, RIONET_DOORBELL_JOIN);
|
||||
}
|
||||
|
||||
return 0;
|
||||
@ -586,6 +649,61 @@ static int rionet_add_dev(struct device *dev, struct subsys_interface *sif)
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int rionet_shutdown(struct notifier_block *nb, unsigned long code,
|
||||
void *unused)
|
||||
{
|
||||
struct rionet_peer *peer;
|
||||
unsigned long flags;
|
||||
int i;
|
||||
|
||||
pr_debug("%s: %s\n", DRV_NAME, __func__);
|
||||
|
||||
for (i = 0; i < RIONET_MAX_NETS; i++) {
|
||||
if (!nets[i].ndev)
|
||||
continue;
|
||||
|
||||
spin_lock_irqsave(&nets[i].lock, flags);
|
||||
list_for_each_entry(peer, &nets[i].peers, node) {
|
||||
if (nets[i].active[peer->rdev->destid]) {
|
||||
rio_send_doorbell(peer->rdev,
|
||||
RIONET_DOORBELL_LEAVE);
|
||||
nets[i].active[peer->rdev->destid] = NULL;
|
||||
}
|
||||
}
|
||||
spin_unlock_irqrestore(&nets[i].lock, flags);
|
||||
}
|
||||
|
||||
return NOTIFY_DONE;
|
||||
}
|
||||
|
||||
static void rionet_remove_mport(struct device *dev,
|
||||
struct class_interface *class_intf)
|
||||
{
|
||||
struct rio_mport *mport = to_rio_mport(dev);
|
||||
struct net_device *ndev;
|
||||
int id = mport->id;
|
||||
|
||||
pr_debug("%s %s\n", __func__, mport->name);
|
||||
|
||||
WARN(nets[id].nact, "%s called when connected to %d peers\n",
|
||||
__func__, nets[id].nact);
|
||||
WARN(!nets[id].ndev, "%s called for mport without NDEV\n",
|
||||
__func__);
|
||||
|
||||
if (nets[id].ndev) {
|
||||
ndev = nets[id].ndev;
|
||||
netif_stop_queue(ndev);
|
||||
unregister_netdev(ndev);
|
||||
|
||||
free_pages((unsigned long)nets[id].active,
|
||||
get_order(sizeof(void *) *
|
||||
RIO_MAX_ROUTE_ENTRIES(mport->sys_size)));
|
||||
nets[id].active = NULL;
|
||||
free_netdev(ndev);
|
||||
nets[id].ndev = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
#ifdef MODULE
|
||||
static struct rio_device_id rionet_id_table[] = {
|
||||
{RIO_DEVICE(RIO_ANY_ID, RIO_ANY_ID)},
|
||||
@ -602,40 +720,43 @@ static struct subsys_interface rionet_interface = {
|
||||
.remove_dev = rionet_remove_dev,
|
||||
};
|
||||
|
||||
static struct notifier_block rionet_notifier = {
|
||||
.notifier_call = rionet_shutdown,
|
||||
};
|
||||
|
||||
/* the rio_mport_interface is used to handle local mport devices */
|
||||
static struct class_interface rio_mport_interface __refdata = {
|
||||
.class = &rio_mport_class,
|
||||
.add_dev = NULL,
|
||||
.remove_dev = rionet_remove_mport,
|
||||
};
|
||||
|
||||
static int __init rionet_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = register_reboot_notifier(&rionet_notifier);
|
||||
if (ret) {
|
||||
pr_err("%s: failed to register reboot notifier (err=%d)\n",
|
||||
DRV_NAME, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = class_interface_register(&rio_mport_interface);
|
||||
if (ret) {
|
||||
pr_err("%s: class_interface_register error: %d\n",
|
||||
DRV_NAME, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return subsys_interface_register(&rionet_interface);
|
||||
}
|
||||
|
||||
static void __exit rionet_exit(void)
|
||||
{
|
||||
struct rionet_private *rnet;
|
||||
struct net_device *ndev;
|
||||
struct rionet_peer *peer, *tmp;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < RIONET_MAX_NETS; i++) {
|
||||
if (nets[i].ndev != NULL) {
|
||||
ndev = nets[i].ndev;
|
||||
rnet = netdev_priv(ndev);
|
||||
unregister_netdev(ndev);
|
||||
|
||||
list_for_each_entry_safe(peer,
|
||||
tmp, &nets[i].peers, node) {
|
||||
list_del(&peer->node);
|
||||
kfree(peer);
|
||||
}
|
||||
|
||||
free_pages((unsigned long)nets[i].active,
|
||||
get_order(sizeof(void *) *
|
||||
RIO_MAX_ROUTE_ENTRIES(rnet->mport->sys_size)));
|
||||
nets[i].active = NULL;
|
||||
|
||||
free_netdev(ndev);
|
||||
}
|
||||
}
|
||||
|
||||
unregister_reboot_notifier(&rionet_notifier);
|
||||
subsys_interface_unregister(&rionet_interface);
|
||||
class_interface_unregister(&rio_mport_interface);
|
||||
}
|
||||
|
||||
late_initcall(rionet_init);

@@ -67,6 +67,14 @@ config RAPIDIO_ENUM_BASIC

endchoice

config RAPIDIO_MPORT_CDEV
tristate "RapidIO /dev mport device driver"
depends on RAPIDIO
help
This option includes the generic RapidIO mport device driver, which
allows user space applications to perform RapidIO-specific
operations through a selected RapidIO mport.
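# Illustration (a sketch, not part of the change): user space reaches this
# driver through a character device node. Assuming the driver creates a node
# named /dev/rio_mport0 (the node name is an assumption here), a minimal
# open looks like:
#
#   #include <fcntl.h>
#
#   int open_mport(void)
#   {
#           return open("/dev/rio_mport0", O_RDWR);
#   }
#
# RapidIO-specific operations are then issued as ioctl()s on the returned
# descriptor, as defined by the driver's UAPI.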

menu "RapidIO Switch drivers"
depends on RAPIDIO

@@ -5,3 +5,4 @@
obj-$(CONFIG_RAPIDIO_TSI721) += tsi721_mport.o
tsi721_mport-y := tsi721.o
tsi721_mport-$(CONFIG_RAPIDIO_DMA_ENGINE) += tsi721_dma.o
obj-$(CONFIG_RAPIDIO_MPORT_CDEV) += rio_mport_cdev.o

drivers/rapidio/devices/rio_mport_cdev.c (new file, 2720 lines)
File diff suppressed because it is too large
@@ -21,6 +21,46 @@
#ifndef __TSI721_H
#define __TSI721_H

/* Debug output filtering masks */
enum {
DBG_NONE = 0,
DBG_INIT = BIT(0), /* driver init */
DBG_EXIT = BIT(1), /* driver exit */
DBG_MPORT = BIT(2), /* mport add/remove */
DBG_MAINT = BIT(3), /* maintenance ops messages */
DBG_DMA = BIT(4), /* DMA transfer messages */
DBG_DMAV = BIT(5), /* verbose DMA transfer messages */
DBG_IBW = BIT(6), /* inbound window */
DBG_EVENT = BIT(7), /* event handling messages */
DBG_OBW = BIT(8), /* outbound window messages */
DBG_DBELL = BIT(9), /* doorbell messages */
DBG_OMSG = BIT(10), /* outbound messaging */
DBG_IMSG = BIT(11), /* inbound messaging */
DBG_ALL = ~0,
};

#ifdef DEBUG
extern u32 dbg_level;

#define tsi_debug(level, dev, fmt, arg...) \
do { \
if (DBG_##level & dbg_level) \
dev_dbg(dev, "%s: " fmt "\n", __func__, ##arg); \
} while (0)
#else
#define tsi_debug(level, dev, fmt, arg...) \
no_printk(KERN_DEBUG "%s: " fmt "\n", __func__, ##arg)
#endif

#define tsi_info(dev, fmt, arg...) \
dev_info(dev, "%s: " fmt "\n", __func__, ##arg)

#define tsi_warn(dev, fmt, arg...) \
dev_warn(dev, "%s: WARNING " fmt "\n", __func__, ##arg)

#define tsi_err(dev, fmt, arg...) \
dev_err(dev, "%s: ERROR " fmt "\n", __func__, ##arg)
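
/*
 * Illustrative use of the helpers above (a sketch, assuming a channel
 * index "ch" and a "struct tsi721_device *priv"): with DEBUG defined and
 * DBG_DMA set in dbg_level,
 *
 *	tsi_debug(DMA, &priv->pdev->dev, "DMAC%d init done", ch);
 *
 * expands to a dev_dbg() call prefixed with the function name; without
 * DEBUG the statement compiles away through no_printk().
 */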

#define DRV_NAME "tsi721"

#define DEFAULT_HOPCOUNT 0xff
@@ -674,7 +714,7 @@ struct tsi721_bdma_chan {
struct dma_chan dchan;
struct tsi721_tx_desc *tx_desc;
spinlock_t lock;
struct list_head active_list;
struct tsi721_tx_desc *active_tx;
struct list_head queue;
struct list_head free_list;
struct tasklet_struct tasklet;
@@ -808,9 +848,38 @@ struct msix_irq {
};
#endif /* CONFIG_PCI_MSI */

struct tsi721_ib_win_mapping {
struct list_head node;
dma_addr_t lstart;
};

struct tsi721_ib_win {
u64 rstart;
u32 size;
dma_addr_t lstart;
bool active;
bool xlat;
struct list_head mappings;
};

struct tsi721_obw_bar {
u64 base;
u64 size;
u64 free;
};

struct tsi721_ob_win {
u64 base;
u32 size;
u16 destid;
u64 rstart;
bool active;
struct tsi721_obw_bar *pbar;
};

struct tsi721_device {
struct pci_dev *pdev;
struct rio_mport *mport;
struct rio_mport mport;
u32 flags;
void __iomem *regs;
#ifdef CONFIG_PCI_MSI
@@ -843,11 +912,25 @@ struct tsi721_device {
/* Outbound Messaging */
int omsg_init[TSI721_OMSG_CHNUM];
struct tsi721_omsg_ring omsg_ring[TSI721_OMSG_CHNUM];

/* Inbound Mapping Windows */
struct tsi721_ib_win ib_win[TSI721_IBWIN_NUM];
int ibwin_cnt;

/* Outbound Mapping Windows */
struct tsi721_obw_bar p2r_bar[2];
struct tsi721_ob_win ob_win[TSI721_OBWIN_NUM];
int obwin_cnt;
};

#ifdef CONFIG_RAPIDIO_DMA_ENGINE
extern void tsi721_bdma_handler(struct tsi721_bdma_chan *bdma_chan);
extern int tsi721_register_dma(struct tsi721_device *priv);
extern void tsi721_unregister_dma(struct tsi721_device *priv);
extern void tsi721_dma_stop_all(struct tsi721_device *priv);
#else
#define tsi721_dma_stop_all(priv) do {} while (0)
#define tsi721_unregister_dma(priv) do {} while (0)
#endif

#endif

@@ -30,6 +30,7 @@
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/kfifo.h>
#include <linux/sched.h>
#include <linux/delay.h>
#include "../../dma/dmaengine.h"

@@ -63,14 +64,6 @@ struct tsi721_tx_desc *to_tsi721_desc(struct dma_async_tx_descriptor *txd)
return container_of(txd, struct tsi721_tx_desc, txd);
}

static inline
struct tsi721_tx_desc *tsi721_dma_first_active(
struct tsi721_bdma_chan *bdma_chan)
{
return list_first_entry(&bdma_chan->active_list,
struct tsi721_tx_desc, desc_node);
}

static int tsi721_bdma_ch_init(struct tsi721_bdma_chan *bdma_chan, int bd_num)
{
struct tsi721_dma_desc *bd_ptr;
@@ -83,7 +76,7 @@ static int tsi721_bdma_ch_init(struct tsi721_bdma_chan *bdma_chan, int bd_num)
struct tsi721_device *priv = to_tsi721(bdma_chan->dchan.device);
#endif

dev_dbg(dev, "Init Block DMA Engine, CH%d\n", bdma_chan->id);
tsi_debug(DMA, &bdma_chan->dchan.dev->device, "DMAC%d", bdma_chan->id);

/*
* Allocate space for DMA descriptors
@@ -91,7 +84,7 @@ static int tsi721_bdma_ch_init(struct tsi721_bdma_chan *bdma_chan, int bd_num)
*/
bd_ptr = dma_zalloc_coherent(dev,
(bd_num + 1) * sizeof(struct tsi721_dma_desc),
&bd_phys, GFP_KERNEL);
&bd_phys, GFP_ATOMIC);
if (!bd_ptr)
return -ENOMEM;

@@ -99,8 +92,9 @@ static int tsi721_bdma_ch_init(struct tsi721_bdma_chan *bdma_chan, int bd_num)
bdma_chan->bd_phys = bd_phys;
bdma_chan->bd_base = bd_ptr;

dev_dbg(dev, "DMA descriptors @ %p (phys = %llx)\n",
bd_ptr, (unsigned long long)bd_phys);
tsi_debug(DMA, &bdma_chan->dchan.dev->device,
"DMAC%d descriptors @ %p (phys = %pad)",
bdma_chan->id, bd_ptr, &bd_phys);

/* Allocate space for descriptor status FIFO */
sts_size = ((bd_num + 1) >= TSI721_DMA_MINSTSSZ) ?
@@ -108,7 +102,7 @@ static int tsi721_bdma_ch_init(struct tsi721_bdma_chan *bdma_chan, int bd_num)
sts_size = roundup_pow_of_two(sts_size);
sts_ptr = dma_zalloc_coherent(dev,
sts_size * sizeof(struct tsi721_dma_sts),
&sts_phys, GFP_KERNEL);
&sts_phys, GFP_ATOMIC);
if (!sts_ptr) {
/* Free space allocated for DMA descriptors */
dma_free_coherent(dev,
@@ -122,9 +116,9 @@ static int tsi721_bdma_ch_init(struct tsi721_bdma_chan *bdma_chan, int bd_num)
bdma_chan->sts_base = sts_ptr;
bdma_chan->sts_size = sts_size;

dev_dbg(dev,
"desc status FIFO @ %p (phys = %llx) size=0x%x\n",
sts_ptr, (unsigned long long)sts_phys, sts_size);
tsi_debug(DMA, &bdma_chan->dchan.dev->device,
"DMAC%d desc status FIFO @ %p (phys = %pad) size=0x%x",
bdma_chan->id, sts_ptr, &sts_phys, sts_size);

/* Initialize DMA descriptors ring using added link descriptor */
bd_ptr[bd_num].type_id = cpu_to_le32(DTYPE3 << 29);
@@ -163,8 +157,9 @@ static int tsi721_bdma_ch_init(struct tsi721_bdma_chan *bdma_chan, int bd_num)
priv->msix[idx].irq_name, (void *)bdma_chan);

if (rc) {
dev_dbg(dev, "Unable to get MSI-X for BDMA%d-DONE\n",
bdma_chan->id);
tsi_debug(DMA, &bdma_chan->dchan.dev->device,
"Unable to get MSI-X for DMAC%d-DONE",
bdma_chan->id);
goto err_out;
}

@@ -174,8 +169,9 @@ static int tsi721_bdma_ch_init(struct tsi721_bdma_chan *bdma_chan, int bd_num)
priv->msix[idx].irq_name, (void *)bdma_chan);

if (rc) {
dev_dbg(dev, "Unable to get MSI-X for BDMA%d-INT\n",
bdma_chan->id);
tsi_debug(DMA, &bdma_chan->dchan.dev->device,
"Unable to get MSI-X for DMAC%d-INT",
bdma_chan->id);
free_irq(
priv->msix[TSI721_VECT_DMA0_DONE +
bdma_chan->id].vector,
@@ -286,7 +282,7 @@ void tsi721_bdma_handler(struct tsi721_bdma_chan *bdma_chan)
/* Disable BDMA channel interrupts */
iowrite32(0, bdma_chan->regs + TSI721_DMAC_INTE);
if (bdma_chan->active)
tasklet_schedule(&bdma_chan->tasklet);
tasklet_hi_schedule(&bdma_chan->tasklet);
}

#ifdef CONFIG_PCI_MSI
@@ -301,7 +297,8 @@ static irqreturn_t tsi721_bdma_msix(int irq, void *ptr)
{
struct tsi721_bdma_chan *bdma_chan = ptr;

tsi721_bdma_handler(bdma_chan);
if (bdma_chan->active)
tasklet_hi_schedule(&bdma_chan->tasklet);
return IRQ_HANDLED;
}
#endif /* CONFIG_PCI_MSI */
@@ -310,20 +307,22 @@ static irqreturn_t tsi721_bdma_msix(int irq, void *ptr)
static void tsi721_start_dma(struct tsi721_bdma_chan *bdma_chan)
{
if (!tsi721_dma_is_idle(bdma_chan)) {
dev_err(bdma_chan->dchan.device->dev,
"BUG: Attempt to start non-idle channel\n");
tsi_err(&bdma_chan->dchan.dev->device,
"DMAC%d Attempt to start non-idle channel",
bdma_chan->id);
return;
}

if (bdma_chan->wr_count == bdma_chan->wr_count_next) {
dev_err(bdma_chan->dchan.device->dev,
"BUG: Attempt to start DMA with no BDs ready\n");
tsi_err(&bdma_chan->dchan.dev->device,
"DMAC%d Attempt to start DMA with no BDs ready %d",
bdma_chan->id, task_pid_nr(current));
return;
}

dev_dbg(bdma_chan->dchan.device->dev,
"%s: chan_%d (wrc=%d)\n", __func__, bdma_chan->id,
bdma_chan->wr_count_next);
tsi_debug(DMA, &bdma_chan->dchan.dev->device, "DMAC%d (wrc=%d) %d",
bdma_chan->id, bdma_chan->wr_count_next,
task_pid_nr(current));

iowrite32(bdma_chan->wr_count_next,
bdma_chan->regs + TSI721_DMAC_DWRCNT);
@@ -425,10 +424,11 @@ static int tsi721_submit_sg(struct tsi721_tx_desc *desc)
struct tsi721_dma_desc *bd_ptr = NULL;
u32 idx, rd_idx;
u32 add_count = 0;
struct device *ch_dev = &dchan->dev->device;

if (!tsi721_dma_is_idle(bdma_chan)) {
dev_err(bdma_chan->dchan.device->dev,
"BUG: Attempt to use non-idle channel\n");
tsi_err(ch_dev, "DMAC%d ERR: Attempt to use non-idle channel",
bdma_chan->id);
return -EIO;
}

@@ -439,7 +439,7 @@ static int tsi721_submit_sg(struct tsi721_tx_desc *desc)
rio_addr = desc->rio_addr;
next_addr = -1;
bcount = 0;
sys_size = dma_to_mport(bdma_chan->dchan.device)->sys_size;
sys_size = dma_to_mport(dchan->device)->sys_size;

rd_idx = ioread32(bdma_chan->regs + TSI721_DMAC_DRDCNT);
rd_idx %= (bdma_chan->bd_num + 1);
@@ -451,18 +451,18 @@ static int tsi721_submit_sg(struct tsi721_tx_desc *desc)
add_count++;
}

dev_dbg(dchan->device->dev, "%s: BD ring status: rdi=%d wri=%d\n",
__func__, rd_idx, idx);
tsi_debug(DMA, ch_dev, "DMAC%d BD ring status: rdi=%d wri=%d",
bdma_chan->id, rd_idx, idx);

for_each_sg(desc->sg, sg, desc->sg_len, i) {

dev_dbg(dchan->device->dev, "sg%d/%d addr: 0x%llx len: %d\n",
i, desc->sg_len,
tsi_debug(DMAV, ch_dev, "DMAC%d sg%d/%d addr: 0x%llx len: %d",
bdma_chan->id, i, desc->sg_len,
(unsigned long long)sg_dma_address(sg), sg_dma_len(sg));

if (sg_dma_len(sg) > TSI721_BDMA_MAX_BCOUNT) {
dev_err(dchan->device->dev,
"%s: SG entry %d is too large\n", __func__, i);
tsi_err(ch_dev, "DMAC%d SG entry %d is too large",
bdma_chan->id, i);
err = -EINVAL;
break;
}
@@ -479,17 +479,16 @@ static int tsi721_submit_sg(struct tsi721_tx_desc *desc)
} else if (next_addr != -1) {
/* Finalize descriptor using total byte count value */
tsi721_desc_fill_end(bd_ptr, bcount, 0);
dev_dbg(dchan->device->dev,
"%s: prev desc final len: %d\n",
__func__, bcount);
tsi_debug(DMAV, ch_dev, "DMAC%d prev desc final len: %d",
bdma_chan->id, bcount);
}

desc->rio_addr = rio_addr;

if (i && idx == rd_idx) {
dev_dbg(dchan->device->dev,
"%s: HW descriptor ring is full @ %d\n",
__func__, i);
tsi_debug(DMAV, ch_dev,
"DMAC%d HW descriptor ring is full @ %d",
bdma_chan->id, i);
desc->sg = sg;
desc->sg_len -= i;
break;
@@ -498,13 +497,12 @@ static int tsi721_submit_sg(struct tsi721_tx_desc *desc)
bd_ptr = &((struct tsi721_dma_desc *)bdma_chan->bd_base)[idx];
err = tsi721_desc_fill_init(desc, bd_ptr, sg, sys_size);
if (err) {
dev_err(dchan->device->dev,
"Failed to build desc: err=%d\n", err);
tsi_err(ch_dev, "Failed to build desc: err=%d", err);
break;
}

dev_dbg(dchan->device->dev, "bd_ptr = %p did=%d raddr=0x%llx\n",
bd_ptr, desc->destid, desc->rio_addr);
tsi_debug(DMAV, ch_dev, "DMAC%d bd_ptr = %p did=%d raddr=0x%llx",
bdma_chan->id, bd_ptr, desc->destid, desc->rio_addr);

next_addr = sg_dma_address(sg);
bcount = sg_dma_len(sg);
@@ -519,8 +517,9 @@ static int tsi721_submit_sg(struct tsi721_tx_desc *desc)
entry_done:
if (sg_is_last(sg)) {
tsi721_desc_fill_end(bd_ptr, bcount, 0);
dev_dbg(dchan->device->dev, "%s: last desc final len: %d\n",
__func__, bcount);
tsi_debug(DMAV, ch_dev,
"DMAC%d last desc final len: %d",
bdma_chan->id, bcount);
desc->sg_len = 0;
} else {
rio_addr += sg_dma_len(sg);
@@ -534,35 +533,43 @@ static int tsi721_submit_sg(struct tsi721_tx_desc *desc)
return err;
}

static void tsi721_advance_work(struct tsi721_bdma_chan *bdma_chan)
static void tsi721_advance_work(struct tsi721_bdma_chan *bdma_chan,
struct tsi721_tx_desc *desc)
{
struct tsi721_tx_desc *desc;
int err;

dev_dbg(bdma_chan->dchan.device->dev, "%s: Enter\n", __func__);
tsi_debug(DMA, &bdma_chan->dchan.dev->device, "DMAC%d", bdma_chan->id);

if (!tsi721_dma_is_idle(bdma_chan))
return;

/*
* If there are any new transactions in the queue add them
* into the processing list
*/
if (!list_empty(&bdma_chan->queue))
list_splice_init(&bdma_chan->queue, &bdma_chan->active_list);
* If there is no data transfer in progress, fetch new descriptor from
* the pending queue.
*/

/* Start new transaction (if available) */
if (!list_empty(&bdma_chan->active_list)) {
desc = tsi721_dma_first_active(bdma_chan);
if (desc == NULL && bdma_chan->active_tx == NULL &&
!list_empty(&bdma_chan->queue)) {
desc = list_first_entry(&bdma_chan->queue,
struct tsi721_tx_desc, desc_node);
list_del_init((&desc->desc_node));
bdma_chan->active_tx = desc;
}

if (desc) {
err = tsi721_submit_sg(desc);
if (!err)
tsi721_start_dma(bdma_chan);
else {
tsi721_dma_tx_err(bdma_chan, desc);
dev_dbg(bdma_chan->dchan.device->dev,
"ERR: tsi721_submit_sg failed with err=%d\n",
err);
tsi_debug(DMA, &bdma_chan->dchan.dev->device,
"DMAC%d ERR: tsi721_submit_sg failed with err=%d",
bdma_chan->id, err);
}
}

dev_dbg(bdma_chan->dchan.device->dev, "%s: Exit\n", __func__);
tsi_debug(DMA, &bdma_chan->dchan.dev->device, "DMAC%d Exit",
bdma_chan->id);
}

static void tsi721_dma_tasklet(unsigned long data)
@@ -571,22 +578,84 @@ static void tsi721_dma_tasklet(unsigned long data)
u32 dmac_int, dmac_sts;

dmac_int = ioread32(bdma_chan->regs + TSI721_DMAC_INT);
dev_dbg(bdma_chan->dchan.device->dev, "%s: DMAC%d_INT = 0x%x\n",
__func__, bdma_chan->id, dmac_int);
tsi_debug(DMA, &bdma_chan->dchan.dev->device, "DMAC%d_INT = 0x%x",
bdma_chan->id, dmac_int);
/* Clear channel interrupts */
iowrite32(dmac_int, bdma_chan->regs + TSI721_DMAC_INT);

if (dmac_int & TSI721_DMAC_INT_ERR) {
int i = 10000;
struct tsi721_tx_desc *desc;

desc = bdma_chan->active_tx;
dmac_sts = ioread32(bdma_chan->regs + TSI721_DMAC_STS);
dev_err(bdma_chan->dchan.device->dev,
"%s: DMA ERROR - DMAC%d_STS = 0x%x\n",
__func__, bdma_chan->id, dmac_sts);
tsi_err(&bdma_chan->dchan.dev->device,
"DMAC%d_STS = 0x%x did=%d raddr=0x%llx",
bdma_chan->id, dmac_sts, desc->destid, desc->rio_addr);

/* Re-initialize DMA channel if possible */

if ((dmac_sts & TSI721_DMAC_STS_ABORT) == 0)
goto err_out;

tsi721_clr_stat(bdma_chan);

spin_lock(&bdma_chan->lock);

/* Put DMA channel into init state */
iowrite32(TSI721_DMAC_CTL_INIT,
bdma_chan->regs + TSI721_DMAC_CTL);
do {
udelay(1);
dmac_sts = ioread32(bdma_chan->regs + TSI721_DMAC_STS);
i--;
} while ((dmac_sts & TSI721_DMAC_STS_ABORT) && i);

if (dmac_sts & TSI721_DMAC_STS_ABORT) {
tsi_err(&bdma_chan->dchan.dev->device,
"Failed to re-initiate DMAC%d", bdma_chan->id);
spin_unlock(&bdma_chan->lock);
goto err_out;
}

/* Setup DMA descriptor pointers */
iowrite32(((u64)bdma_chan->bd_phys >> 32),
bdma_chan->regs + TSI721_DMAC_DPTRH);
iowrite32(((u64)bdma_chan->bd_phys & TSI721_DMAC_DPTRL_MASK),
bdma_chan->regs + TSI721_DMAC_DPTRL);

/* Setup descriptor status FIFO */
iowrite32(((u64)bdma_chan->sts_phys >> 32),
bdma_chan->regs + TSI721_DMAC_DSBH);
iowrite32(((u64)bdma_chan->sts_phys & TSI721_DMAC_DSBL_MASK),
bdma_chan->regs + TSI721_DMAC_DSBL);
iowrite32(TSI721_DMAC_DSSZ_SIZE(bdma_chan->sts_size),
bdma_chan->regs + TSI721_DMAC_DSSZ);

/* Clear interrupt bits */
iowrite32(TSI721_DMAC_INT_ALL,
bdma_chan->regs + TSI721_DMAC_INT);

ioread32(bdma_chan->regs + TSI721_DMAC_INT);

bdma_chan->wr_count = bdma_chan->wr_count_next = 0;
bdma_chan->sts_rdptr = 0;
udelay(10);

desc = bdma_chan->active_tx;
desc->status = DMA_ERROR;
dma_cookie_complete(&desc->txd);
list_add(&desc->desc_node, &bdma_chan->free_list);
bdma_chan->active_tx = NULL;
if (bdma_chan->active)
tsi721_advance_work(bdma_chan, NULL);
spin_unlock(&bdma_chan->lock);
}

if (dmac_int & TSI721_DMAC_INT_STFULL) {
dev_err(bdma_chan->dchan.device->dev,
"%s: DMAC%d descriptor status FIFO is full\n",
__func__, bdma_chan->id);
tsi_err(&bdma_chan->dchan.dev->device,
"DMAC%d descriptor status FIFO is full",
bdma_chan->id);
}

if (dmac_int & (TSI721_DMAC_INT_DONE | TSI721_DMAC_INT_IOFDONE)) {
@@ -594,7 +663,7 @@ static void tsi721_dma_tasklet(unsigned long data)

tsi721_clr_stat(bdma_chan);
spin_lock(&bdma_chan->lock);
desc = tsi721_dma_first_active(bdma_chan);
desc = bdma_chan->active_tx;

if (desc->sg_len == 0) {
dma_async_tx_callback callback = NULL;
@@ -606,17 +675,21 @@ static void tsi721_dma_tasklet(unsigned long data)
callback = desc->txd.callback;
param = desc->txd.callback_param;
}
list_move(&desc->desc_node, &bdma_chan->free_list);
list_add(&desc->desc_node, &bdma_chan->free_list);
bdma_chan->active_tx = NULL;
if (bdma_chan->active)
tsi721_advance_work(bdma_chan, NULL);
spin_unlock(&bdma_chan->lock);
if (callback)
callback(param);
spin_lock(&bdma_chan->lock);
} else {
if (bdma_chan->active)
tsi721_advance_work(bdma_chan,
bdma_chan->active_tx);
spin_unlock(&bdma_chan->lock);
}

tsi721_advance_work(bdma_chan);
spin_unlock(&bdma_chan->lock);
}

err_out:
/* Re-Enable BDMA channel interrupts */
iowrite32(TSI721_DMAC_INT_ALL, bdma_chan->regs + TSI721_DMAC_INTE);
}
@@ -629,8 +702,9 @@ static dma_cookie_t tsi721_tx_submit(struct dma_async_tx_descriptor *txd)

/* Check if the descriptor is detached from any lists */
if (!list_empty(&desc->desc_node)) {
dev_err(bdma_chan->dchan.device->dev,
"%s: wrong state of descriptor %p\n", __func__, txd);
tsi_err(&bdma_chan->dchan.dev->device,
"DMAC%d wrong state of descriptor %p",
bdma_chan->id, txd);
return -EIO;
}

@@ -655,25 +729,25 @@ static int tsi721_alloc_chan_resources(struct dma_chan *dchan)
struct tsi721_tx_desc *desc = NULL;
int i;

dev_dbg(dchan->device->dev, "%s: for channel %d\n",
__func__, bdma_chan->id);
tsi_debug(DMA, &dchan->dev->device, "DMAC%d", bdma_chan->id);

if (bdma_chan->bd_base)
return TSI721_DMA_TX_QUEUE_SZ;

/* Initialize BDMA channel */
if (tsi721_bdma_ch_init(bdma_chan, dma_desc_per_channel)) {
dev_err(dchan->device->dev, "Unable to initialize data DMA"
" channel %d, aborting\n", bdma_chan->id);
tsi_err(&dchan->dev->device, "Unable to initialize DMAC%d",
bdma_chan->id);
return -ENODEV;
}

/* Allocate queue of transaction descriptors */
desc = kcalloc(TSI721_DMA_TX_QUEUE_SZ, sizeof(struct tsi721_tx_desc),
GFP_KERNEL);
GFP_ATOMIC);
if (!desc) {
dev_err(dchan->device->dev,
"Failed to allocate logical descriptors\n");
tsi_err(&dchan->dev->device,
"DMAC%d Failed to allocate logical descriptors",
bdma_chan->id);
tsi721_bdma_ch_free(bdma_chan);
return -ENOMEM;
}
@@ -714,15 +788,11 @@ static void tsi721_free_chan_resources(struct dma_chan *dchan)
{
struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);

dev_dbg(dchan->device->dev, "%s: for channel %d\n",
__func__, bdma_chan->id);
tsi_debug(DMA, &dchan->dev->device, "DMAC%d", bdma_chan->id);

if (bdma_chan->bd_base == NULL)
return;

BUG_ON(!list_empty(&bdma_chan->active_list));
BUG_ON(!list_empty(&bdma_chan->queue));

tsi721_bdma_interrupt_enable(bdma_chan, 0);
bdma_chan->active = false;
tsi721_sync_dma_irq(bdma_chan);
@@ -736,20 +806,26 @@ static
enum dma_status tsi721_tx_status(struct dma_chan *dchan, dma_cookie_t cookie,
struct dma_tx_state *txstate)
{
return dma_cookie_status(dchan, cookie, txstate);
struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);
enum dma_status status;

spin_lock_bh(&bdma_chan->lock);
status = dma_cookie_status(dchan, cookie, txstate);
spin_unlock_bh(&bdma_chan->lock);
return status;
}

static void tsi721_issue_pending(struct dma_chan *dchan)
{
struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);

dev_dbg(dchan->device->dev, "%s: Enter\n", __func__);
tsi_debug(DMA, &dchan->dev->device, "DMAC%d", bdma_chan->id);

spin_lock_bh(&bdma_chan->lock);
if (tsi721_dma_is_idle(bdma_chan) && bdma_chan->active) {
spin_lock_bh(&bdma_chan->lock);
tsi721_advance_work(bdma_chan);
spin_unlock_bh(&bdma_chan->lock);
tsi721_advance_work(bdma_chan, NULL);
}
spin_unlock_bh(&bdma_chan->lock);
}

static
@@ -759,18 +835,19 @@ struct dma_async_tx_descriptor *tsi721_prep_rio_sg(struct dma_chan *dchan,
void *tinfo)
{
struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);
struct tsi721_tx_desc *desc, *_d;
struct tsi721_tx_desc *desc;
struct rio_dma_ext *rext = tinfo;
enum dma_rtype rtype;
struct dma_async_tx_descriptor *txd = NULL;

if (!sgl || !sg_len) {
dev_err(dchan->device->dev, "%s: No SG list\n", __func__);
return NULL;
tsi_err(&dchan->dev->device, "DMAC%d No SG list",
bdma_chan->id);
return ERR_PTR(-EINVAL);
}

dev_dbg(dchan->device->dev, "%s: %s\n", __func__,
(dir == DMA_DEV_TO_MEM)?"READ":"WRITE");
tsi_debug(DMA, &dchan->dev->device, "DMAC%d %s", bdma_chan->id,
(dir == DMA_DEV_TO_MEM)?"READ":"WRITE");

if (dir == DMA_DEV_TO_MEM)
rtype = NREAD;
@@ -788,30 +865,36 @@ struct dma_async_tx_descriptor *tsi721_prep_rio_sg(struct dma_chan *dchan,
break;
}
} else {
dev_err(dchan->device->dev,
"%s: Unsupported DMA direction option\n", __func__);
return NULL;
tsi_err(&dchan->dev->device,
"DMAC%d Unsupported DMA direction option",
bdma_chan->id);
return ERR_PTR(-EINVAL);
}

spin_lock_bh(&bdma_chan->lock);

list_for_each_entry_safe(desc, _d, &bdma_chan->free_list, desc_node) {
if (async_tx_test_ack(&desc->txd)) {
list_del_init(&desc->desc_node);
desc->destid = rext->destid;
desc->rio_addr = rext->rio_addr;
desc->rio_addr_u = 0;
desc->rtype = rtype;
desc->sg_len = sg_len;
desc->sg = sgl;
txd = &desc->txd;
txd->flags = flags;
break;
}
if (!list_empty(&bdma_chan->free_list)) {
desc = list_first_entry(&bdma_chan->free_list,
struct tsi721_tx_desc, desc_node);
list_del_init(&desc->desc_node);
desc->destid = rext->destid;
desc->rio_addr = rext->rio_addr;
desc->rio_addr_u = 0;
desc->rtype = rtype;
desc->sg_len = sg_len;
desc->sg = sgl;
txd = &desc->txd;
txd->flags = flags;
}

spin_unlock_bh(&bdma_chan->lock);

if (!txd) {
tsi_debug(DMA, &dchan->dev->device,
"DMAC%d free TXD is not available", bdma_chan->id);
return ERR_PTR(-EBUSY);
}

return txd;
}

@@ -819,16 +902,18 @@ static int tsi721_terminate_all(struct dma_chan *dchan)
{
struct tsi721_bdma_chan *bdma_chan = to_tsi721_chan(dchan);
struct tsi721_tx_desc *desc, *_d;
u32 dmac_int;
LIST_HEAD(list);

dev_dbg(dchan->device->dev, "%s: Entry\n", __func__);
tsi_debug(DMA, &dchan->dev->device, "DMAC%d", bdma_chan->id);

spin_lock_bh(&bdma_chan->lock);

bdma_chan->active = false;

if (!tsi721_dma_is_idle(bdma_chan)) {
while (!tsi721_dma_is_idle(bdma_chan)) {

udelay(5);
#if (0)
/* make sure to stop the transfer */
iowrite32(TSI721_DMAC_CTL_SUSP,
bdma_chan->regs + TSI721_DMAC_CTL);
@@ -837,9 +922,11 @@ static int tsi721_terminate_all(struct dma_chan *dchan)
do {
dmac_int = ioread32(bdma_chan->regs + TSI721_DMAC_INT);
} while ((dmac_int & TSI721_DMAC_INT_SUSP) == 0);
#endif
}

list_splice_init(&bdma_chan->active_list, &list);
if (bdma_chan->active_tx)
list_add(&bdma_chan->active_tx->desc_node, &list);
list_splice_init(&bdma_chan->queue, &list);

list_for_each_entry_safe(desc, _d, &list, desc_node)
@@ -850,12 +937,42 @@ static int tsi721_terminate_all(struct dma_chan *dchan)
return 0;
}

static void tsi721_dma_stop(struct tsi721_bdma_chan *bdma_chan)
{
if (!bdma_chan->active)
return;
spin_lock_bh(&bdma_chan->lock);
if (!tsi721_dma_is_idle(bdma_chan)) {
int timeout = 100000;

/* stop the transfer in progress */
iowrite32(TSI721_DMAC_CTL_SUSP,
bdma_chan->regs + TSI721_DMAC_CTL);

/* Wait until DMA channel stops */
while (!tsi721_dma_is_idle(bdma_chan) && --timeout)
udelay(1);
}

spin_unlock_bh(&bdma_chan->lock);
}

void tsi721_dma_stop_all(struct tsi721_device *priv)
{
int i;

for (i = 0; i < TSI721_DMA_MAXCH; i++) {
if (i != TSI721_DMACH_MAINT)
tsi721_dma_stop(&priv->bdma[i]);
}
}

int tsi721_register_dma(struct tsi721_device *priv)
{
int i;
int nr_channels = 0;
int err;
struct rio_mport *mport = priv->mport;
struct rio_mport *mport = &priv->mport;

INIT_LIST_HEAD(&mport->dma.channels);

@@ -875,7 +992,7 @@ int tsi721_register_dma(struct tsi721_device *priv)

spin_lock_init(&bdma_chan->lock);

INIT_LIST_HEAD(&bdma_chan->active_list);
bdma_chan->active_tx = NULL;
INIT_LIST_HEAD(&bdma_chan->queue);
INIT_LIST_HEAD(&bdma_chan->free_list);

@@ -901,7 +1018,33 @@ int tsi721_register_dma(struct tsi721_device *priv)

err = dma_async_device_register(&mport->dma);
if (err)
dev_err(&priv->pdev->dev, "Failed to register DMA device\n");
tsi_err(&priv->pdev->dev, "Failed to register DMA device");

return err;
}

void tsi721_unregister_dma(struct tsi721_device *priv)
{
struct rio_mport *mport = &priv->mport;
struct dma_chan *chan, *_c;
struct tsi721_bdma_chan *bdma_chan;

tsi721_dma_stop_all(priv);
dma_async_device_unregister(&mport->dma);

list_for_each_entry_safe(chan, _c, &mport->dma.channels,
device_node) {
bdma_chan = to_tsi721_chan(chan);
if (bdma_chan->active) {
tsi721_bdma_interrupt_enable(bdma_chan, 0);
bdma_chan->active = false;
tsi721_sync_dma_irq(bdma_chan);
tasklet_kill(&bdma_chan->tasklet);
INIT_LIST_HEAD(&bdma_chan->free_list);
kfree(bdma_chan->tx_desc);
tsi721_bdma_ch_free(bdma_chan);
}

list_del(&chan->device_node);
}
}

@@ -131,6 +131,17 @@ static int rio_device_remove(struct device *dev)
return 0;
}

static void rio_device_shutdown(struct device *dev)
{
struct rio_dev *rdev = to_rio_dev(dev);
struct rio_driver *rdrv = rdev->driver;

dev_dbg(dev, "RIO: %s\n", __func__);

if (rdrv && rdrv->shutdown)
rdrv->shutdown(rdev);
}

/**
* rio_register_driver - register a new RIO driver
* @rdrv: the RIO driver structure to register
@@ -229,6 +240,7 @@ struct bus_type rio_bus_type = {
.bus_groups = rio_bus_groups,
.probe = rio_device_probe,
.remove = rio_device_remove,
.shutdown = rio_device_shutdown,
.uevent = rio_uevent,
};

@@ -39,6 +39,13 @@

static void rio_init_em(struct rio_dev *rdev);

struct rio_id_table {
u16 start; /* logical minimal id */
u32 max; /* max number of IDs in table */
spinlock_t lock;
unsigned long table[0];
};

static int next_destid = 0;
static int next_comptag = 1;

@@ -62,7 +69,7 @@ static int rio_mport_phys_table[] = {
static u16 rio_destid_alloc(struct rio_net *net)
{
int destid;
struct rio_id_table *idtab = &net->destid_table;
struct rio_id_table *idtab = (struct rio_id_table *)net->enum_data;

spin_lock(&idtab->lock);
destid = find_first_zero_bit(idtab->table, idtab->max);
@@ -88,7 +95,7 @@ static u16 rio_destid_alloc(struct rio_net *net)
static int rio_destid_reserve(struct rio_net *net, u16 destid)
{
int oldbit;
struct rio_id_table *idtab = &net->destid_table;
struct rio_id_table *idtab = (struct rio_id_table *)net->enum_data;

destid -= idtab->start;
spin_lock(&idtab->lock);
@@ -106,7 +113,7 @@ static int rio_destid_reserve(struct rio_net *net, u16 destid)
*/
static void rio_destid_free(struct rio_net *net, u16 destid)
{
struct rio_id_table *idtab = &net->destid_table;
struct rio_id_table *idtab = (struct rio_id_table *)net->enum_data;

destid -= idtab->start;
spin_lock(&idtab->lock);
@@ -121,7 +128,7 @@ static void rio_destid_free(struct rio_net *net, u16 destid)
static u16 rio_destid_first(struct rio_net *net)
{
int destid;
struct rio_id_table *idtab = &net->destid_table;
struct rio_id_table *idtab = (struct rio_id_table *)net->enum_data;

spin_lock(&idtab->lock);
destid = find_first_bit(idtab->table, idtab->max);
@@ -141,7 +148,7 @@ static u16 rio_destid_first(struct rio_net *net)
static u16 rio_destid_next(struct rio_net *net, u16 from)
{
int destid;
struct rio_id_table *idtab = &net->destid_table;
struct rio_id_table *idtab = (struct rio_id_table *)net->enum_data;

spin_lock(&idtab->lock);
destid = find_next_bit(idtab->table, idtab->max, from);
@@ -186,19 +193,6 @@ static void rio_set_device_id(struct rio_mport *port, u16 destid, u8 hopcount, u
RIO_SET_DID(port->sys_size, did));
}

/**
* rio_local_set_device_id - Set the base/extended device id for a port
* @port: RIO master port
* @did: Device ID value to be written
*
* Writes the base/extended device id from a device.
*/
static void rio_local_set_device_id(struct rio_mport *port, u16 did)
{
rio_local_write_config_32(port, RIO_DID_CSR, RIO_SET_DID(port->sys_size,
did));
}

/**
* rio_clear_locks- Release all host locks and signal enumeration complete
* @net: RIO network to run on
@@ -449,9 +443,6 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,

if (do_enum)
rio_route_clr_table(rdev, RIO_GLOBAL_TABLE, 0);

list_add_tail(&rswitch->node, &net->switches);

} else {
if (do_enum)
/*Enable Input Output Port (transmitter receiver)*/
@@ -461,13 +452,9 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
rdev->comp_tag & RIO_CTAG_UDEVID);
}

rdev->dev.parent = &port->dev;
rdev->dev.parent = &net->dev;
rio_attach_device(rdev);

device_initialize(&rdev->dev);
rdev->dev.release = rio_release_dev;
rio_dev_get(rdev);

rdev->dma_mask = DMA_BIT_MASK(32);
rdev->dev.dma_mask = &rdev->dma_mask;
rdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
@@ -480,6 +467,8 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
if (ret)
goto cleanup;

rio_dev_get(rdev);

return rdev;

cleanup:
@@ -621,8 +610,6 @@ static int rio_enum_peer(struct rio_net *net, struct rio_mport *port,
rdev = rio_setup_device(net, port, RIO_ANY_DESTID(port->sys_size),
hopcount, 1);
if (rdev) {
/* Add device to the global and bus/net specific list. */
list_add_tail(&rdev->net_list, &net->devices);
rdev->prev = prev;
if (prev && rio_is_switch(prev))
prev->rswitch->nextdev[prev_port] = rdev;
@@ -778,8 +765,6 @@ rio_disc_peer(struct rio_net *net, struct rio_mport *port, u16 destid,

/* Setup new RIO device */
if ((rdev = rio_setup_device(net, port, destid, hopcount, 0))) {
/* Add device to the global and bus/net specific list. */
list_add_tail(&rdev->net_list, &net->devices);
rdev->prev = prev;
if (prev && rio_is_switch(prev))
prev->rswitch->nextdev[prev_port] = rdev;
@@ -864,50 +849,71 @@ static int rio_mport_is_active(struct rio_mport *port)
return result & RIO_PORT_N_ERR_STS_PORT_OK;
}

/**
* rio_alloc_net- Allocate and configure a new RIO network
* @port: Master port associated with the RIO network
* @do_enum: Enumeration/Discovery mode flag
* @start: logical minimal start id for new net
*
* Allocates a RIO network structure, initializes per-network
* list heads, and adds the associated master port to the
* network list of associated master ports. Returns a
* RIO network pointer on success or %NULL on failure.
*/
static struct rio_net *rio_alloc_net(struct rio_mport *port,
int do_enum, u16 start)
static void rio_scan_release_net(struct rio_net *net)
{
pr_debug("RIO-SCAN: %s: net_%d\n", __func__, net->id);
kfree(net->enum_data);
}

static void rio_scan_release_dev(struct device *dev)
{
struct rio_net *net;

net = kzalloc(sizeof(struct rio_net), GFP_KERNEL);
if (net && do_enum) {
net->destid_table.table = kcalloc(
BITS_TO_LONGS(RIO_MAX_ROUTE_ENTRIES(port->sys_size)),
sizeof(long),
GFP_KERNEL);
net = to_rio_net(dev);
pr_debug("RIO-SCAN: %s: net_%d\n", __func__, net->id);
kfree(net);
}

if (net->destid_table.table == NULL) {
/*
* rio_scan_alloc_net - Allocate and configure a new RIO network
* @mport: Master port associated with the RIO network
* @do_enum: Enumeration/Discovery mode flag
* @start: logical minimal start id for new net
*
* Allocates a new RIO network structure and initializes enumerator-specific
* part of it (if required).
* Returns a RIO network pointer on success or %NULL on failure.
*/
static struct rio_net *rio_scan_alloc_net(struct rio_mport *mport,
int do_enum, u16 start)
{
struct rio_net *net;

net = rio_alloc_net(mport);

if (net && do_enum) {
struct rio_id_table *idtab;
size_t size;

size = sizeof(struct rio_id_table) +
BITS_TO_LONGS(
RIO_MAX_ROUTE_ENTRIES(mport->sys_size)
) * sizeof(long);

idtab = kzalloc(size, GFP_KERNEL);

if (idtab == NULL) {
pr_err("RIO: failed to allocate destID table\n");
kfree(net);
rio_free_net(net);
net = NULL;
} else {
net->destid_table.start = start;
net->destid_table.max =
RIO_MAX_ROUTE_ENTRIES(port->sys_size);
spin_lock_init(&net->destid_table.lock);
net->enum_data = idtab;
net->release = rio_scan_release_net;
idtab->start = start;
idtab->max = RIO_MAX_ROUTE_ENTRIES(mport->sys_size);
spin_lock_init(&idtab->lock);
}
}

if (net) {
INIT_LIST_HEAD(&net->node);
INIT_LIST_HEAD(&net->devices);
INIT_LIST_HEAD(&net->switches);
INIT_LIST_HEAD(&net->mports);
list_add_tail(&port->nnode, &net->mports);
net->hport = port;
net->id = port->id;
net->id = mport->id;
net->hport = mport;
dev_set_name(&net->dev, "rnet_%d", net->id);
net->dev.parent = &mport->dev;
net->dev.release = rio_scan_release_dev;
rio_add_net(net);
}

return net;
}

@@ -967,17 +973,6 @@ static void rio_init_em(struct rio_dev *rdev)
}
}

/**
* rio_pw_enable - Enables/disables port-write handling by a master port
* @port: Master port associated with port-write handling
* @enable: 1=enable, 0=disable
*/
static void rio_pw_enable(struct rio_mport *port, int enable)
{
if (port->ops->pwenable)
port->ops->pwenable(port, enable);
}

/**
* rio_enum_mport- Start enumeration through a master port
* @mport: Master port to send transactions
@@ -1016,7 +1011,7 @@ static int rio_enum_mport(struct rio_mport *mport, u32 flags)

/* If master port has an active link, allocate net and enum peers */
if (rio_mport_is_active(mport)) {
net = rio_alloc_net(mport, 1, 0);
net = rio_scan_alloc_net(mport, 1, 0);
if (!net) {
printk(KERN_ERR "RIO: failed to allocate new net\n");
rc = -ENOMEM;
@@ -1133,7 +1128,7 @@ static int rio_disc_mport(struct rio_mport *mport, u32 flags)
enum_done:
pr_debug("RIO: ... enumeration done\n");

net = rio_alloc_net(mport, 0, 0);
net = rio_scan_alloc_net(mport, 0, 0);
if (!net) {
printk(KERN_ERR "RIO: Failed to allocate new net\n");
goto bail;

@@ -30,6 +30,20 @@

#include "rio.h"

/*
 * struct rio_pwrite - RIO portwrite event
 * @node: Node in list of port-write events
 * @pwcback: Port-write event callback
 * @context: Handler specific context to pass on event
 */
struct rio_pwrite {
struct list_head node;

int (*pwcback)(struct rio_mport *mport, void *context,
union rio_pw_msg *msg, int step);
void *context;
};

MODULE_DESCRIPTION("RapidIO Subsystem Core");
MODULE_AUTHOR("Matt Porter <mporter@kernel.crashing.org>");
MODULE_AUTHOR("Alexandre Bounine <alexandre.bounine@idt.com>");
@@ -42,6 +56,7 @@ MODULE_PARM_DESC(hdid,
"Destination ID assignment to local RapidIO controllers");

static LIST_HEAD(rio_devices);
static LIST_HEAD(rio_nets);
static DEFINE_SPINLOCK(rio_global_list_lock);

static LIST_HEAD(rio_mports);
@@ -67,6 +82,89 @@ u16 rio_local_get_device_id(struct rio_mport *port)
return (RIO_GET_DID(port->sys_size, result));
}

/**
* rio_query_mport - Query mport device attributes
* @port: mport device to query
* @mport_attr: mport attributes data structure
*
* Returns attributes of specified mport through the
* pointer to attributes data structure.
*/
int rio_query_mport(struct rio_mport *port,
struct rio_mport_attr *mport_attr)
{
if (!port->ops->query_mport)
return -ENODATA;
return port->ops->query_mport(port, mport_attr);
}
EXPORT_SYMBOL(rio_query_mport);
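
/*
 * Minimal caller sketch (assumes a valid mport; the attribute fields are
 * filled by the underlying driver's query_mport method):
 *
 *	struct rio_mport_attr attr;
 *
 *	if (!rio_query_mport(mport, &attr))
 *		pr_info("RIO: mport %d attributes queried\n", mport->id);
 */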

/**
* rio_alloc_net- Allocate and initialize a new RIO network data structure
* @mport: Master port associated with the RIO network
*
* Allocates a RIO network structure, initializes per-network
* list heads, and adds the associated master port to the
* network list of associated master ports. Returns a
* RIO network pointer on success or %NULL on failure.
*/
struct rio_net *rio_alloc_net(struct rio_mport *mport)
{
struct rio_net *net;

net = kzalloc(sizeof(struct rio_net), GFP_KERNEL);
if (net) {
INIT_LIST_HEAD(&net->node);
INIT_LIST_HEAD(&net->devices);
INIT_LIST_HEAD(&net->switches);
INIT_LIST_HEAD(&net->mports);
mport->net = net;
}
return net;
}
EXPORT_SYMBOL_GPL(rio_alloc_net);
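
/*
 * Usage pattern, as the rio-scan enumerator in this series applies it
 * (sketch only; my_release is a caller-supplied function):
 *
 *	net = rio_alloc_net(mport);
 *	net->id = mport->id;
 *	net->hport = mport;
 *	dev_set_name(&net->dev, "rnet_%d", net->id);
 *	net->dev.parent = &mport->dev;
 *	net->dev.release = my_release;
 *	rio_add_net(net);
 *	...
 *	rio_free_net(net);	(drops the list entry, runs net->release)
 */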

int rio_add_net(struct rio_net *net)
{
int err;

err = device_register(&net->dev);
if (err)
return err;
spin_lock(&rio_global_list_lock);
list_add_tail(&net->node, &rio_nets);
spin_unlock(&rio_global_list_lock);

return 0;
}
EXPORT_SYMBOL_GPL(rio_add_net);

void rio_free_net(struct rio_net *net)
{
spin_lock(&rio_global_list_lock);
if (!list_empty(&net->node))
list_del(&net->node);
spin_unlock(&rio_global_list_lock);
if (net->release)
net->release(net);
device_unregister(&net->dev);
}
EXPORT_SYMBOL_GPL(rio_free_net);

/**
* rio_local_set_device_id - Set the base/extended device id for a port
* @port: RIO master port
* @did: Device ID value to be written
*
* Writes the base/extended device id from a device.
*/
void rio_local_set_device_id(struct rio_mport *port, u16 did)
{
rio_local_write_config_32(port, RIO_DID_CSR,
RIO_SET_DID(port->sys_size, did));
}
EXPORT_SYMBOL_GPL(rio_local_set_device_id);

/**
* rio_add_device- Adds a RIO device to the device model
* @rdev: RIO device
@@ -79,12 +177,19 @@ int rio_add_device(struct rio_dev *rdev)
{
int err;

err = device_add(&rdev->dev);
atomic_set(&rdev->state, RIO_DEVICE_RUNNING);
err = device_register(&rdev->dev);
if (err)
return err;

spin_lock(&rio_global_list_lock);
list_add_tail(&rdev->global_list, &rio_devices);
if (rdev->net) {
list_add_tail(&rdev->net_list, &rdev->net->devices);
if (rdev->pef & RIO_PEF_SWITCH)
list_add_tail(&rdev->rswitch->node,
&rdev->net->switches);
}
spin_unlock(&rio_global_list_lock);

rio_create_sysfs_dev_files(rdev);
@@ -93,6 +198,33 @@ int rio_add_device(struct rio_dev *rdev)
}
EXPORT_SYMBOL_GPL(rio_add_device);

/*
* rio_del_device - removes a RIO device from the device model
* @rdev: RIO device
* @state: device state to set during removal process
*
* Removes the RIO device from the kernel device list and subsystem's device list.
|
||||
* Clears sysfs entries for the removed device.
|
||||
*/
|
||||
void rio_del_device(struct rio_dev *rdev, enum rio_device_state state)
|
||||
{
|
||||
pr_debug("RIO: %s: removing %s\n", __func__, rio_name(rdev));
|
||||
atomic_set(&rdev->state, state);
|
||||
spin_lock(&rio_global_list_lock);
|
||||
list_del(&rdev->global_list);
|
||||
if (rdev->net) {
|
||||
list_del(&rdev->net_list);
|
||||
if (rdev->pef & RIO_PEF_SWITCH) {
|
||||
list_del(&rdev->rswitch->node);
|
||||
kfree(rdev->rswitch->route_table);
|
||||
}
|
||||
}
|
||||
spin_unlock(&rio_global_list_lock);
|
||||
rio_remove_sysfs_dev_files(rdev);
|
||||
device_unregister(&rdev->dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(rio_del_device);
|

/**
 * rio_request_inb_mbox - request inbound mailbox service
 * @mport: RIO master port from which to allocate the mailbox resource
@@ -258,7 +390,9 @@ rio_setup_inb_dbell(struct rio_mport *mport, void *dev_id, struct resource *res,
	dbell->dinb = dinb;
	dbell->dev_id = dev_id;

	mutex_lock(&mport->lock);
	list_add_tail(&dbell->node, &mport->dbells);
	mutex_unlock(&mport->lock);

out:
	return rc;
@@ -322,12 +456,15 @@ int rio_release_inb_dbell(struct rio_mport *mport, u16 start, u16 end)
	int rc = 0, found = 0;
	struct rio_dbell *dbell;

	mutex_lock(&mport->lock);
	list_for_each_entry(dbell, &mport->dbells, node) {
		if ((dbell->res->start == start) && (dbell->res->end == end)) {
			list_del(&dbell->node);
			found = 1;
			break;
		}
	}
	mutex_unlock(&mport->lock);

	/* If we can't find an exact match, fail */
	if (!found) {
@@ -335,9 +472,6 @@ int rio_release_inb_dbell(struct rio_mport *mport, u16 start, u16 end)
		goto out;
	}

	/* Delete from list */
	list_del(&dbell->node);

	/* Release the doorbell resource */
	rc = release_resource(dbell->res);

@@ -394,7 +528,71 @@ int rio_release_outb_dbell(struct rio_dev *rdev, struct resource *res)
}

/**
 * rio_request_inb_pwrite - request inbound port-write message service
 * rio_add_mport_pw_handler - add port-write message handler into the list
 *                            of mport specific pw handlers
 * @mport: RIO master port to bind the portwrite callback
 * @context: Handler specific context to pass on event
 * @pwcback: Callback to execute when portwrite is received
 *
 * Returns 0 if the request has been satisfied.
 */
int rio_add_mport_pw_handler(struct rio_mport *mport, void *context,
			     int (*pwcback)(struct rio_mport *mport,
					    void *context, union rio_pw_msg *msg, int step))
{
	int rc = 0;
	struct rio_pwrite *pwrite;

	pwrite = kzalloc(sizeof(struct rio_pwrite), GFP_KERNEL);
	if (!pwrite) {
		rc = -ENOMEM;
		goto out;
	}

	pwrite->pwcback = pwcback;
	pwrite->context = context;
	mutex_lock(&mport->lock);
	list_add_tail(&pwrite->node, &mport->pwrites);
	mutex_unlock(&mport->lock);
out:
	return rc;
}
EXPORT_SYMBOL_GPL(rio_add_mport_pw_handler);

/**
 * rio_del_mport_pw_handler - remove port-write message handler from the list
 *                            of mport specific pw handlers
 * @mport: RIO master port to bind the portwrite callback
 * @context: Registered handler specific context to pass on event
 * @pwcback: Registered callback function
 *
 * Returns 0 if the request has been satisfied.
 */
int rio_del_mport_pw_handler(struct rio_mport *mport, void *context,
			     int (*pwcback)(struct rio_mport *mport,
					    void *context, union rio_pw_msg *msg, int step))
{
	int rc = -EINVAL;
	struct rio_pwrite *pwrite;

	mutex_lock(&mport->lock);
	list_for_each_entry(pwrite, &mport->pwrites, node) {
		if (pwrite->pwcback == pwcback && pwrite->context == context) {
			list_del(&pwrite->node);
			kfree(pwrite);
			rc = 0;
			break;
		}
	}
	mutex_unlock(&mport->lock);

	return rc;
}
EXPORT_SYMBOL_GPL(rio_del_mport_pw_handler);
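Together the two helpers above give drivers a symmetric way to attach and detach per-mport port-write consumers. A sketch of typical use, assuming mport and a driver-private context pointer come from the probe path; all `my_`-prefixed names are hypothetical, and the callback signature is the one declared by rio_add_mport_pw_handler() above:

static int my_pw_callback(struct rio_mport *mport, void *context,
			  union rio_pw_msg *msg, int step)
{
	/* Inspect the component tag carried in the port-write payload. */
	pr_debug("my_drv: PW on mport_%d, comptag 0x%08x\n",
		 mport->id, msg->em.comptag);
	return 0;	/* not inspected for mport-level handlers in this patch */
}

	/* in probe: */
	err = rio_add_mport_pw_handler(mport, my_ctx, my_pw_callback);

	/* in remove: */
	rio_del_mport_pw_handler(mport, my_ctx, my_pw_callback);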

/**
 * rio_request_inb_pwrite - request inbound port-write message service for
 *                          specific RapidIO device
 * @rdev: RIO device to which register inbound port-write callback routine
 * @pwcback: Callback routine to execute when port-write is received
 *
@@ -419,6 +617,7 @@ EXPORT_SYMBOL_GPL(rio_request_inb_pwrite);

/**
 * rio_release_inb_pwrite - release inbound port-write message service
 *                          associated with specific RapidIO device
 * @rdev: RIO device which registered for inbound port-write callback
 *
 * Removes callback from the rio_dev structure. Returns 0 if the request
@@ -439,6 +638,24 @@ int rio_release_inb_pwrite(struct rio_dev *rdev)
}
EXPORT_SYMBOL_GPL(rio_release_inb_pwrite);

/**
 * rio_pw_enable - Enables/disables port-write handling by a master port
 * @mport: Master port associated with port-write handling
 * @enable: 1=enable, 0=disable
 */
void rio_pw_enable(struct rio_mport *mport, int enable)
{
	if (mport->ops->pwenable) {
		mutex_lock(&mport->lock);

		if ((enable && ++mport->pwe_refcnt == 1) ||
		    (!enable && mport->pwe_refcnt && --mport->pwe_refcnt == 0))
			mport->ops->pwenable(mport, enable);
		mutex_unlock(&mport->lock);
	}
}
EXPORT_SYMBOL_GPL(rio_pw_enable);
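The refcount makes the enable/disable pairs nestable: the hardware hook fires only on the 0-to-1 and 1-to-0 transitions, so independent users need not coordinate. An annotated call sequence illustrating the gating above:

	rio_pw_enable(mport, 1);  /* refcnt 0 -> 1: ops->pwenable(mport, 1) */
	rio_pw_enable(mport, 1);  /* refcnt 1 -> 2: no hardware access */
	rio_pw_enable(mport, 0);  /* refcnt 2 -> 1: no hardware access */
	rio_pw_enable(mport, 0);  /* refcnt 1 -> 0: ops->pwenable(mport, 0) */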

/**
 * rio_map_inb_region -- Map inbound memory region.
 * @mport: Master port.
@@ -482,6 +699,56 @@ void rio_unmap_inb_region(struct rio_mport *mport, dma_addr_t lstart)
}
EXPORT_SYMBOL_GPL(rio_unmap_inb_region);

/**
 * rio_map_outb_region -- Map outbound memory region.
 * @mport: Master port.
 * @destid: destination id window points to
 * @rbase: RIO base address window translates to
 * @size: Size of the memory region
 * @rflags: Flags for mapping.
 * @local: physical address of memory region mapped
 *
 * Return: 0 -- Success.
 *
 * This function will create the mapping from local memory to RIO space.
 */
int rio_map_outb_region(struct rio_mport *mport, u16 destid, u64 rbase,
			u32 size, u32 rflags, dma_addr_t *local)
{
	int rc = 0;
	unsigned long flags;

	if (!mport->ops->map_outb)
		return -ENODEV;

	spin_lock_irqsave(&rio_mmap_lock, flags);
	rc = mport->ops->map_outb(mport, destid, rbase, size,
				  rflags, local);
	spin_unlock_irqrestore(&rio_mmap_lock, flags);

	return rc;
}
EXPORT_SYMBOL_GPL(rio_map_outb_region);

/**
 * rio_unmap_outb_region -- Unmap the outbound memory region
 * @mport: Master port
 * @destid: destination id mapping points to
 * @rstart: RIO base address window translates to
 */
void rio_unmap_outb_region(struct rio_mport *mport, u16 destid, u64 rstart)
{
	unsigned long flags;

	if (!mport->ops->unmap_outb)
		return;

	spin_lock_irqsave(&rio_mmap_lock, flags);
	mport->ops->unmap_outb(mport, destid, rstart);
	spin_unlock_irqrestore(&rio_mmap_lock, flags);
}
EXPORT_SYMBOL_GPL(rio_unmap_outb_region);
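A caller gets back the physical address of the local window and can then access it with memremap(), whose MEMREMAP_WC flag is added elsewhere in this same merge. A hedged sketch (error handling trimmed; DEST_ID, RIO_BASE, WIN_SIZE and the rflags value of 0 are illustrative, not defined by this patch):

	dma_addr_t win_phys;
	void *win;

	if (!rio_map_outb_region(mport, DEST_ID, RIO_BASE, WIN_SIZE,
				 0, &win_phys)) {
		win = memremap(win_phys, WIN_SIZE, MEMREMAP_WC);
		if (win) {
			/* CPU stores to 'win' are forwarded to RIO space */
			memunmap(win);
		}
		rio_unmap_outb_region(mport, DEST_ID, RIO_BASE);
	}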

/**
 * rio_mport_get_physefb - Helper function that returns register offset
 *                         for Physical Layer Extended Features Block.
@@ -864,52 +1131,66 @@ static int rio_clr_err_stopped(struct rio_dev *rdev, u32 pnum, u32 err_status)
}

/**
 * rio_inb_pwrite_handler - process inbound port-write message
 * rio_inb_pwrite_handler - inbound port-write message handler
 * @mport: mport device associated with port-write
 * @pw_msg: pointer to inbound port-write message
 *
 * Processes an inbound port-write message. Returns 0 if the request
 * has been satisfied.
 */
int rio_inb_pwrite_handler(union rio_pw_msg *pw_msg)
int rio_inb_pwrite_handler(struct rio_mport *mport, union rio_pw_msg *pw_msg)
{
	struct rio_dev *rdev;
	u32 err_status, em_perrdet, em_ltlerrdet;
	int rc, portnum;

	rdev = rio_get_comptag((pw_msg->em.comptag & RIO_CTAG_UDEVID), NULL);
	if (rdev == NULL) {
		/* Device removed or enumeration error */
		pr_debug("RIO: %s No matching device for CTag 0x%08x\n",
			__func__, pw_msg->em.comptag);
		return -EIO;
	}

	pr_debug("RIO: Port-Write message from %s\n", rio_name(rdev));
	struct rio_pwrite *pwrite;

#ifdef DEBUG_PW
	{
		u32 i;
		for (i = 0; i < RIO_PW_MSG_SIZE/sizeof(u32);) {
		u32 i;

		pr_debug("%s: PW to mport_%d:\n", __func__, mport->id);
		for (i = 0; i < RIO_PW_MSG_SIZE / sizeof(u32); i = i + 4) {
			pr_debug("0x%02x: %08x %08x %08x %08x\n",
				i*4, pw_msg->raw[i], pw_msg->raw[i + 1],
				pw_msg->raw[i + 2], pw_msg->raw[i + 3]);
			i += 4;
		}
				i * 4, pw_msg->raw[i], pw_msg->raw[i + 1],
				pw_msg->raw[i + 2], pw_msg->raw[i + 3]);
		}
	}
#endif

	/* Call an external service function (if such is registered
	 * for this device). This may be the service for endpoints that send
	 * device-specific port-write messages. End-point messages expected
	 * to be handled completely by EP specific device driver.
	rdev = rio_get_comptag((pw_msg->em.comptag & RIO_CTAG_UDEVID), NULL);
	if (rdev) {
		pr_debug("RIO: Port-Write message from %s\n", rio_name(rdev));
	} else {
		pr_debug("RIO: %s No matching device for CTag 0x%08x\n",
			__func__, pw_msg->em.comptag);
	}

	/* Call a device-specific handler (if it is registered for the device).
	 * This may be the service for endpoints that send device-specific
	 * port-write messages. End-point messages expected to be handled
	 * completely by EP specific device driver.
	 * For switches rc==0 signals that no standard processing required.
	 */
	if (rdev->pwcback != NULL) {
	if (rdev && rdev->pwcback) {
		rc = rdev->pwcback(rdev, pw_msg, 0);
		if (rc == 0)
			return 0;
	}

	mutex_lock(&mport->lock);
	list_for_each_entry(pwrite, &mport->pwrites, node)
		pwrite->pwcback(mport, pwrite->context, pw_msg, 0);
	mutex_unlock(&mport->lock);

	if (!rdev)
		return 0;

	/*
	 * FIXME: The code below stays as it was before for now until we decide
	 * how to do default PW handling in combination with per-mport callbacks
	 */

	portnum = pw_msg->em.is_port & 0xFF;

	/* Check if device and route to it are functional:
@@ -1909,32 +2190,31 @@ static int rio_get_hdid(int index)
	return hdid[index];
}

int rio_mport_initialize(struct rio_mport *mport)
{
	if (next_portid >= RIO_MAX_MPORTS) {
		pr_err("RIO: reached specified max number of mports\n");
		return -ENODEV;
	}

	atomic_set(&mport->state, RIO_DEVICE_INITIALIZING);
	mport->id = next_portid++;
	mport->host_deviceid = rio_get_hdid(mport->id);
	mport->nscan = NULL;
	mutex_init(&mport->lock);
	mport->pwe_refcnt = 0;
	INIT_LIST_HEAD(&mport->pwrites);

	return 0;
}
EXPORT_SYMBOL_GPL(rio_mport_initialize);

int rio_register_mport(struct rio_mport *port)
{
	struct rio_scan_node *scan = NULL;
	int res = 0;

	if (next_portid >= RIO_MAX_MPORTS) {
		pr_err("RIO: reached specified max number of mports\n");
		return 1;
	}

	port->id = next_portid++;
	port->host_deviceid = rio_get_hdid(port->id);
	port->nscan = NULL;

	dev_set_name(&port->dev, "rapidio%d", port->id);
	port->dev.class = &rio_mport_class;

	res = device_register(&port->dev);
	if (res)
		dev_err(&port->dev, "RIO: mport%d registration failed ERR=%d\n",
			port->id, res);
	else
		dev_dbg(&port->dev, "RIO: mport%d registered\n", port->id);

	mutex_lock(&rio_mport_list_lock);
	list_add_tail(&port->node, &rio_mports);

	/*
	 * Check if there are any registered enumeration/discovery operations
@@ -1948,13 +2228,74 @@ int rio_register_mport(struct rio_mport *port)
			break;
		}
	}

	list_add_tail(&port->node, &rio_mports);
	mutex_unlock(&rio_mport_list_lock);

	pr_debug("RIO: %s %s id=%d\n", __func__, port->name, port->id);
	return 0;
	dev_set_name(&port->dev, "rapidio%d", port->id);
	port->dev.class = &rio_mport_class;
	atomic_set(&port->state, RIO_DEVICE_RUNNING);

	res = device_register(&port->dev);
	if (res)
		dev_err(&port->dev, "RIO: mport%d registration failed ERR=%d\n",
			port->id, res);
	else
		dev_dbg(&port->dev, "RIO: registered mport%d\n", port->id);

	return res;
}
EXPORT_SYMBOL_GPL(rio_register_mport);

static int rio_mport_cleanup_callback(struct device *dev, void *data)
{
	struct rio_dev *rdev = to_rio_dev(dev);

	if (dev->bus == &rio_bus_type)
		rio_del_device(rdev, RIO_DEVICE_SHUTDOWN);
	return 0;
}

static int rio_net_remove_children(struct rio_net *net)
{
	/*
	 * Unregister all RapidIO devices residing on this net (this will
	 * invoke notification of registered subsystem interfaces as well).
	 */
	device_for_each_child(&net->dev, NULL, rio_mport_cleanup_callback);
	return 0;
}

int rio_unregister_mport(struct rio_mport *port)
{
	pr_debug("RIO: %s %s id=%d\n", __func__, port->name, port->id);

	/* Transition mport to the SHUTDOWN state */
	if (atomic_cmpxchg(&port->state,
			   RIO_DEVICE_RUNNING,
			   RIO_DEVICE_SHUTDOWN) != RIO_DEVICE_RUNNING) {
		pr_err("RIO: %s unexpected state transition for mport %s\n",
		       __func__, port->name);
	}

	if (port->net && port->net->hport == port) {
		rio_net_remove_children(port->net);
		rio_free_net(port->net);
	}

	/*
	 * Unregister all RapidIO devices attached to this mport (this will
	 * invoke notification of registered subsystem interfaces as well).
	 */
	mutex_lock(&rio_mport_list_lock);
	list_del(&port->node);
	mutex_unlock(&rio_mport_list_lock);
	device_unregister(&port->dev);

	return 0;
}
EXPORT_SYMBOL_GPL(rio_unregister_mport);

EXPORT_SYMBOL_GPL(rio_local_get_device_id);
EXPORT_SYMBOL_GPL(rio_get_device);
EXPORT_SYMBOL_GPL(rio_get_asm);
@@ -28,6 +28,7 @@ extern u32 rio_mport_get_efb(struct rio_mport *port, int local, u16 destid,
extern int rio_mport_chk_dev_access(struct rio_mport *mport, u16 destid,
				    u8 hopcount);
extern int rio_create_sysfs_dev_files(struct rio_dev *rdev);
extern void rio_remove_sysfs_dev_files(struct rio_dev *rdev);
extern int rio_lock_device(struct rio_mport *port, u16 destid,
			   u8 hopcount, int wait_ms);
extern int rio_unlock_device(struct rio_mport *port, u16 destid, u8 hopcount);
@@ -38,7 +39,11 @@ extern int rio_route_get_entry(struct rio_dev *rdev, u16 table,
extern int rio_route_clr_table(struct rio_dev *rdev, u16 table, int lock);
extern int rio_set_port_lockout(struct rio_dev *rdev, u32 pnum, int lock);
extern struct rio_dev *rio_get_comptag(u32 comp_tag, struct rio_dev *from);
extern struct rio_net *rio_alloc_net(struct rio_mport *mport);
extern int rio_add_net(struct rio_net *net);
extern void rio_free_net(struct rio_net *net);
extern int rio_add_device(struct rio_dev *rdev);
extern void rio_del_device(struct rio_dev *rdev, enum rio_device_state state);
extern int rio_enable_rx_tx_port(struct rio_mport *port, int local, u16 destid,
				 u8 hopcount, u8 port_num);
extern int rio_register_scan(int mport_id, struct rio_scan *scan_ops);
@@ -657,7 +657,7 @@ static inline int ll_need_32bit_api(struct ll_sb_info *sbi)
#if BITS_PER_LONG == 32
	return 1;
#elif defined(CONFIG_COMPAT)
	return unlikely(is_compat_task() || (sbi->ll_flags & LL_SBI_32BIT_API));
	return unlikely(in_compat_syscall() || (sbi->ll_flags & LL_SBI_32BIT_API));
#else
	return unlikely(sbi->ll_flags & LL_SBI_32BIT_API);
#endif
@@ -484,7 +484,7 @@ static int hpwdt_pretimeout(unsigned int ulReason, struct pt_regs *regs)
	static int die_nmi_called;

	if (!hpwdt_nmi_decoding)
		goto out;
		return NMI_DONE;

	spin_lock_irqsave(&rom_lock, rom_pl);
	if (!die_nmi_called && !is_icru && !is_uefi)
@@ -497,11 +497,11 @@ static int hpwdt_pretimeout(unsigned int ulReason, struct pt_regs *regs)

	if (!is_icru && !is_uefi) {
		if (cmn_regs.u1.ral == 0) {
			panic("An NMI occurred, "
			      "but unable to determine source.\n");
			nmi_panic(regs, "An NMI occurred, but unable to determine source.\n");
			return NMI_HANDLED;
		}
	}
	panic("An NMI occurred. Depending on your system the reason "
	nmi_panic(regs, "An NMI occurred. Depending on your system the reason "
		"for the NMI is logged in any one of the following "
		"resources:\n"
		"1. Integrated Management Log (IML)\n"
@@ -509,8 +509,7 @@ static int hpwdt_pretimeout(unsigned int ulReason, struct pt_regs *regs)
		"3. OA Forward Progress Log\n"
		"4. iLO Event Log");

out:
	return NMI_DONE;
	return NMI_HANDLED;
}
#endif /* CONFIG_HPWDT_NMI_DECODING */

@@ -32,6 +32,9 @@
#include <linux/pipe_fs_i.h>
#include <linux/oom.h>
#include <linux/compat.h>
#include <linux/sched.h>
#include <linux/fs.h>
#include <linux/path.h>
#include <linux/timekeeping.h>

#include <asm/uaccess.h>
@@ -649,6 +652,8 @@ void do_coredump(const siginfo_t *siginfo)
		}
	} else {
		struct inode *inode;
		int open_flags = O_CREAT | O_RDWR | O_NOFOLLOW |
				 O_LARGEFILE | O_EXCL;

		if (cprm.limit < binfmt->min_coredump)
			goto fail_unlock;
@@ -687,10 +692,27 @@ void do_coredump(const siginfo_t *siginfo)
		 * what matters is that at least one of the two processes
		 * writes its coredump successfully, not which one.
		 */
		cprm.file = filp_open(cn.corename,
				      O_CREAT | 2 | O_NOFOLLOW |
				      O_LARGEFILE | O_EXCL,
				      0600);
		if (need_suid_safe) {
			/*
			 * Using user namespaces, normal user tasks can change
			 * their current->fs->root to point to arbitrary
			 * directories. Since the intention of the "only dump
			 * with a fully qualified path" rule is to control where
			 * coredumps may be placed using root privileges,
			 * current->fs->root must not be used. Instead, use the
			 * root directory of init_task.
			 */
			struct path root;

			task_lock(&init_task);
			get_fs_root(init_task.fs, &root);
			task_unlock(&init_task);
			cprm.file = file_open_root(root.dentry, root.mnt,
						   cn.corename, open_flags, 0600);
			path_put(&root);
		} else {
			cprm.file = filp_open(cn.corename, open_flags, 0600);
		}
		if (IS_ERR(cprm.file))
			goto fail_unlock;

42
fs/eventfd.c
@@ -121,8 +121,46 @@ static unsigned int eventfd_poll(struct file *file, poll_table *wait)
	u64 count;

	poll_wait(file, &ctx->wqh, wait);
	smp_rmb();
	count = ctx->count;

	/*
	 * All writes to ctx->count occur within ctx->wqh.lock. This read
	 * can be done outside ctx->wqh.lock because we know that poll_wait
	 * takes that lock (through add_wait_queue) if our caller will sleep.
	 *
	 * The read _can_ therefore seep into add_wait_queue's critical
	 * section, but cannot move above it! add_wait_queue's spin_lock acts
	 * as an acquire barrier and ensures that the read be ordered properly
	 * against the writes. The following CAN happen and is safe:
	 *
	 *     poll                               write
	 *     -----------------                  ------------
	 *     lock ctx->wqh.lock (in poll_wait)
	 *     count = ctx->count
	 *     __add_wait_queue
	 *     unlock ctx->wqh.lock
	 *                                        lock ctx->wqh.lock
	 *                                        ctx->count += n
	 *                                        if (waitqueue_active)
	 *                                          wake_up_locked_poll
	 *                                        unlock ctx->wqh.lock
	 *     eventfd_poll returns 0
	 *
	 * but the following, which would miss a wakeup, cannot happen:
	 *
	 *     poll                               write
	 *     -----------------                  ------------
	 *     count = ctx->count (INVALID!)
	 *                                        lock ctx->wqh.lock
	 *                                        ctx->count += n
	 *                                        **waitqueue_active is false**
	 *                                        **no wake_up_locked_poll!**
	 *                                        unlock ctx->wqh.lock
	 *     lock ctx->wqh.lock (in poll_wait)
	 *     __add_wait_queue
	 *     unlock ctx->wqh.lock
	 *     eventfd_poll returns 0
	 */
	count = READ_ONCE(ctx->count);

	if (count > 0)
		events |= POLLIN;
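For reference, the counter whose ordering the new comment documents is the one userspace drives through the eventfd syscalls. A minimal (non-semaphore mode) usage sketch, error handling omitted:

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
	uint64_t val = 3;
	int efd = eventfd(0, 0);

	write(efd, &val, sizeof(val));	/* ctx->count += 3, wakes pollers */
	read(efd, &val, sizeof(val));	/* returns 3, resets count to 0 */
	close(efd);
	return 0;
}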
@@ -285,7 +285,7 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx)
static inline int is_32bit_api(void)
{
#ifdef CONFIG_COMPAT
	return is_compat_task();
	return in_compat_syscall();
#else
	return (BITS_PER_LONG == 32);
#endif
@@ -93,8 +93,24 @@ config FAT_DEFAULT_IOCHARSET
	  that most of your FAT filesystems use, and can be overridden
	  with the "iocharset" mount option for FAT filesystems.
	  Note that "utf8" is not recommended for FAT filesystems.
	  If unsure, you shouldn't set "utf8" here.
	  If unsure, you shouldn't set "utf8" here - select the next option
	  instead if you would like to use UTF-8 encoded file names by default.
	  See <file:Documentation/filesystems/vfat.txt> for more information.

	  Enable any character sets you need in File Systems/Native Language
	  Support.

config FAT_DEFAULT_UTF8
	bool "Enable FAT UTF-8 option by default"
	depends on VFAT_FS
	default n
	help
	  Set this if you would like to have "utf8" mount option set
	  by default when mounting FAT filesystems.

	  Even if you say Y here, you can always disable UTF-8 for a
	  particular mount by adding "utf8=0" to the mount options.

	  Say Y if you use UTF-8 encoding for file names, N otherwise.

	  See <file:Documentation/filesystems/vfat.txt> for more information.
@@ -1127,7 +1127,7 @@ static int parse_options(struct super_block *sb, char *options, int is_vfat,
	}
	opts->name_check = 'n';
	opts->quiet = opts->showexec = opts->sys_immutable = opts->dotsOK = 0;
	opts->utf8 = opts->unicode_xlate = 0;
	opts->unicode_xlate = 0;
	opts->numtail = 1;
	opts->usefree = opts->nocase = 0;
	opts->tz_set = 0;
@@ -1135,6 +1135,8 @@ static int parse_options(struct super_block *sb, char *options, int is_vfat,
	opts->errors = FAT_ERRORS_RO;
	*debug = 0;

	opts->utf8 = IS_ENABLED(CONFIG_FAT_DEFAULT_UTF8) && is_vfat;

	if (!options)
		goto out;

@@ -228,7 +228,7 @@ long do_handle_open(int mountdirfd,
		path_put(&path);
		return fd;
	}
	file = file_open_root(path.dentry, path.mnt, "", open_flag);
	file = file_open_root(path.dentry, path.mnt, "", open_flag, 0);
	if (IS_ERR(file)) {
		put_unused_fd(fd);
		retval = PTR_ERR(file);
@@ -41,7 +41,8 @@ ocfs2-objs := \
	quota_local.o		\
	quota_global.o		\
	xattr.o			\
	acl.o
	acl.o			\
	filecheck.o

ocfs2_stackglue-objs := stackglue.o
ocfs2_stack_o2cb-objs := stack_o2cb.o
606
fs/ocfs2/filecheck.c
Normal file
@@ -0,0 +1,606 @@
/* -*- mode: c; c-basic-offset: 8; -*-
 * vim: noexpandtab sw=8 ts=8 sts=0:
 *
 * filecheck.c
 *
 * Code which implements online file check.
 *
 * Copyright (C) 2016 SuSE. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License as published by the Free Software Foundation, version 2.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 */

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/kmod.h>
#include <linux/fs.h>
#include <linux/kobject.h>
#include <linux/sysfs.h>
#include <linux/sysctl.h>
#include <cluster/masklog.h>

#include "ocfs2.h"
#include "ocfs2_fs.h"
#include "stackglue.h"
#include "inode.h"

#include "filecheck.h"


/* File check error strings,
 * must correspond with error number in header file.
 */
static const char * const ocfs2_filecheck_errs[] = {
	"SUCCESS",
	"FAILED",
	"INPROGRESS",
	"READONLY",
	"INJBD",
	"INVALIDINO",
	"BLOCKECC",
	"BLOCKNO",
	"VALIDFLAG",
	"GENERATION",
	"UNSUPPORTED"
};

static DEFINE_SPINLOCK(ocfs2_filecheck_sysfs_lock);
static LIST_HEAD(ocfs2_filecheck_sysfs_list);

struct ocfs2_filecheck {
	struct list_head fc_head;	/* File check entry list head */
	spinlock_t fc_lock;
	unsigned int fc_max;	/* Maximum number of entries in list */
	unsigned int fc_size;	/* Current entry count in list */
	unsigned int fc_done;	/* Finished entry count in list */
};

struct ocfs2_filecheck_sysfs_entry {	/* sysfs entry per mounting */
	struct list_head fs_list;
	atomic_t fs_count;
	struct super_block *fs_sb;
	struct kset *fs_devicekset;
	struct kset *fs_fcheckkset;
	struct ocfs2_filecheck *fs_fcheck;
};

#define OCFS2_FILECHECK_MAXSIZE		100
#define OCFS2_FILECHECK_MINSIZE		10

/* File check operation type */
enum {
	OCFS2_FILECHECK_TYPE_CHK = 0,	/* Check a file(inode) */
	OCFS2_FILECHECK_TYPE_FIX,	/* Fix a file(inode) */
	OCFS2_FILECHECK_TYPE_SET = 100	/* Set entry list maximum size */
};

struct ocfs2_filecheck_entry {
	struct list_head fe_list;
	unsigned long fe_ino;
	unsigned int fe_type;
	unsigned int fe_done:1;
	unsigned int fe_status:31;
};

struct ocfs2_filecheck_args {
	unsigned int fa_type;
	union {
		unsigned long fa_ino;
		unsigned int fa_len;
	};
};
||||
|
||||
static const char *
|
||||
ocfs2_filecheck_error(int errno)
|
||||
{
|
||||
if (!errno)
|
||||
return ocfs2_filecheck_errs[errno];
|
||||
|
||||
BUG_ON(errno < OCFS2_FILECHECK_ERR_START ||
|
||||
errno > OCFS2_FILECHECK_ERR_END);
|
||||
return ocfs2_filecheck_errs[errno - OCFS2_FILECHECK_ERR_START + 1];
|
||||
}
|
||||
|
||||
static ssize_t ocfs2_filecheck_show(struct kobject *kobj,
|
||||
struct kobj_attribute *attr,
|
||||
char *buf);
|
||||
static ssize_t ocfs2_filecheck_store(struct kobject *kobj,
|
||||
struct kobj_attribute *attr,
|
||||
const char *buf, size_t count);
|
||||
static struct kobj_attribute ocfs2_attr_filecheck_chk =
|
||||
__ATTR(check, S_IRUSR | S_IWUSR,
|
||||
ocfs2_filecheck_show,
|
||||
ocfs2_filecheck_store);
|
||||
static struct kobj_attribute ocfs2_attr_filecheck_fix =
|
||||
__ATTR(fix, S_IRUSR | S_IWUSR,
|
||||
ocfs2_filecheck_show,
|
||||
ocfs2_filecheck_store);
|
||||
static struct kobj_attribute ocfs2_attr_filecheck_set =
|
||||
__ATTR(set, S_IRUSR | S_IWUSR,
|
||||
ocfs2_filecheck_show,
|
||||
ocfs2_filecheck_store);
|
||||
|
||||
static int ocfs2_filecheck_sysfs_wait(atomic_t *p)
|
||||
{
|
||||
schedule();
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void
|
||||
ocfs2_filecheck_sysfs_free(struct ocfs2_filecheck_sysfs_entry *entry)
|
||||
{
|
||||
struct ocfs2_filecheck_entry *p;
|
||||
|
||||
if (!atomic_dec_and_test(&entry->fs_count))
|
||||
wait_on_atomic_t(&entry->fs_count, ocfs2_filecheck_sysfs_wait,
|
||||
TASK_UNINTERRUPTIBLE);
|
||||
|
||||
spin_lock(&entry->fs_fcheck->fc_lock);
|
||||
while (!list_empty(&entry->fs_fcheck->fc_head)) {
|
||||
p = list_first_entry(&entry->fs_fcheck->fc_head,
|
||||
struct ocfs2_filecheck_entry, fe_list);
|
||||
list_del(&p->fe_list);
|
||||
	BUG_ON(!p->fe_done); /* To free an undone file check entry */
|
||||
kfree(p);
|
||||
}
|
||||
spin_unlock(&entry->fs_fcheck->fc_lock);
|
||||
|
||||
kset_unregister(entry->fs_fcheckkset);
|
||||
kset_unregister(entry->fs_devicekset);
|
||||
kfree(entry->fs_fcheck);
|
||||
kfree(entry);
|
||||
}
|
||||
|
||||
static void
|
||||
ocfs2_filecheck_sysfs_add(struct ocfs2_filecheck_sysfs_entry *entry)
|
||||
{
|
||||
spin_lock(&ocfs2_filecheck_sysfs_lock);
|
||||
list_add_tail(&entry->fs_list, &ocfs2_filecheck_sysfs_list);
|
||||
spin_unlock(&ocfs2_filecheck_sysfs_lock);
|
||||
}
|
||||
|
||||
static int ocfs2_filecheck_sysfs_del(const char *devname)
|
||||
{
|
||||
struct ocfs2_filecheck_sysfs_entry *p;
|
||||
|
||||
spin_lock(&ocfs2_filecheck_sysfs_lock);
|
||||
list_for_each_entry(p, &ocfs2_filecheck_sysfs_list, fs_list) {
|
||||
if (!strcmp(p->fs_sb->s_id, devname)) {
|
||||
list_del(&p->fs_list);
|
||||
spin_unlock(&ocfs2_filecheck_sysfs_lock);
|
||||
ocfs2_filecheck_sysfs_free(p);
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
spin_unlock(&ocfs2_filecheck_sysfs_lock);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static void
|
||||
ocfs2_filecheck_sysfs_put(struct ocfs2_filecheck_sysfs_entry *entry)
|
||||
{
|
||||
if (atomic_dec_and_test(&entry->fs_count))
|
||||
wake_up_atomic_t(&entry->fs_count);
|
||||
}
|
||||
|
||||
static struct ocfs2_filecheck_sysfs_entry *
|
||||
ocfs2_filecheck_sysfs_get(const char *devname)
|
||||
{
|
||||
struct ocfs2_filecheck_sysfs_entry *p = NULL;
|
||||
|
||||
spin_lock(&ocfs2_filecheck_sysfs_lock);
|
||||
list_for_each_entry(p, &ocfs2_filecheck_sysfs_list, fs_list) {
|
||||
if (!strcmp(p->fs_sb->s_id, devname)) {
|
||||
atomic_inc(&p->fs_count);
|
||||
spin_unlock(&ocfs2_filecheck_sysfs_lock);
|
||||
return p;
|
||||
}
|
||||
}
|
||||
spin_unlock(&ocfs2_filecheck_sysfs_lock);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
int ocfs2_filecheck_create_sysfs(struct super_block *sb)
|
||||
{
|
||||
int ret = 0;
|
||||
struct kset *device_kset = NULL;
|
||||
struct kset *fcheck_kset = NULL;
|
||||
struct ocfs2_filecheck *fcheck = NULL;
|
||||
struct ocfs2_filecheck_sysfs_entry *entry = NULL;
|
||||
struct attribute **attrs = NULL;
|
||||
struct attribute_group attrgp;
|
||||
|
||||
if (!ocfs2_kset)
|
||||
return -ENOMEM;
|
||||
|
||||
attrs = kmalloc(sizeof(struct attribute *) * 4, GFP_NOFS);
|
||||
if (!attrs) {
|
||||
ret = -ENOMEM;
|
||||
goto error;
|
||||
} else {
|
||||
attrs[0] = &ocfs2_attr_filecheck_chk.attr;
|
||||
attrs[1] = &ocfs2_attr_filecheck_fix.attr;
|
||||
attrs[2] = &ocfs2_attr_filecheck_set.attr;
|
||||
attrs[3] = NULL;
|
||||
memset(&attrgp, 0, sizeof(attrgp));
|
||||
attrgp.attrs = attrs;
|
||||
}
|
||||
|
||||
fcheck = kmalloc(sizeof(struct ocfs2_filecheck), GFP_NOFS);
|
||||
if (!fcheck) {
|
||||
ret = -ENOMEM;
|
||||
goto error;
|
||||
} else {
|
||||
INIT_LIST_HEAD(&fcheck->fc_head);
|
||||
spin_lock_init(&fcheck->fc_lock);
|
||||
fcheck->fc_max = OCFS2_FILECHECK_MINSIZE;
|
||||
fcheck->fc_size = 0;
|
||||
fcheck->fc_done = 0;
|
||||
}
|
||||
|
||||
if (strlen(sb->s_id) <= 0) {
|
||||
mlog(ML_ERROR,
|
||||
"Cannot get device basename when create filecheck sysfs\n");
|
||||
ret = -ENODEV;
|
||||
goto error;
|
||||
}
|
||||
|
||||
device_kset = kset_create_and_add(sb->s_id, NULL, &ocfs2_kset->kobj);
|
||||
if (!device_kset) {
|
||||
ret = -ENOMEM;
|
||||
goto error;
|
||||
}
|
||||
|
||||
fcheck_kset = kset_create_and_add("filecheck", NULL,
|
||||
&device_kset->kobj);
|
||||
if (!fcheck_kset) {
|
||||
ret = -ENOMEM;
|
||||
goto error;
|
||||
}
|
||||
|
||||
ret = sysfs_create_group(&fcheck_kset->kobj, &attrgp);
|
||||
if (ret)
|
||||
goto error;
|
||||
|
||||
entry = kmalloc(sizeof(struct ocfs2_filecheck_sysfs_entry), GFP_NOFS);
|
||||
if (!entry) {
|
||||
ret = -ENOMEM;
|
||||
goto error;
|
||||
} else {
|
||||
atomic_set(&entry->fs_count, 1);
|
||||
entry->fs_sb = sb;
|
||||
entry->fs_devicekset = device_kset;
|
||||
entry->fs_fcheckkset = fcheck_kset;
|
||||
entry->fs_fcheck = fcheck;
|
||||
ocfs2_filecheck_sysfs_add(entry);
|
||||
}
|
||||
|
||||
kfree(attrs);
|
||||
return 0;
|
||||
|
||||
error:
|
||||
kfree(attrs);
|
||||
kfree(entry);
|
||||
kfree(fcheck);
|
||||
kset_unregister(fcheck_kset);
|
||||
kset_unregister(device_kset);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int ocfs2_filecheck_remove_sysfs(struct super_block *sb)
|
||||
{
|
||||
return ocfs2_filecheck_sysfs_del(sb->s_id);
|
||||
}
|
||||
|
||||
static int
|
||||
ocfs2_filecheck_erase_entries(struct ocfs2_filecheck_sysfs_entry *ent,
|
||||
unsigned int count);
|
||||
static int
|
||||
ocfs2_filecheck_adjust_max(struct ocfs2_filecheck_sysfs_entry *ent,
|
||||
unsigned int len)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if ((len < OCFS2_FILECHECK_MINSIZE) || (len > OCFS2_FILECHECK_MAXSIZE))
|
||||
return -EINVAL;
|
||||
|
||||
spin_lock(&ent->fs_fcheck->fc_lock);
|
||||
if (len < (ent->fs_fcheck->fc_size - ent->fs_fcheck->fc_done)) {
|
||||
mlog(ML_ERROR,
|
||||
"Cannot set online file check maximum entry number "
|
||||
"to %u due to too many pending entries(%u)\n",
|
||||
len, ent->fs_fcheck->fc_size - ent->fs_fcheck->fc_done);
|
||||
ret = -EBUSY;
|
||||
} else {
|
||||
if (len < ent->fs_fcheck->fc_size)
|
||||
BUG_ON(!ocfs2_filecheck_erase_entries(ent,
|
||||
ent->fs_fcheck->fc_size - len));
|
||||
|
||||
ent->fs_fcheck->fc_max = len;
|
||||
ret = 0;
|
||||
}
|
||||
spin_unlock(&ent->fs_fcheck->fc_lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
#define OCFS2_FILECHECK_ARGS_LEN 24
|
||||
static int
|
||||
ocfs2_filecheck_args_get_long(const char *buf, size_t count,
|
||||
unsigned long *val)
|
||||
{
|
||||
char buffer[OCFS2_FILECHECK_ARGS_LEN];
|
||||
|
||||
memcpy(buffer, buf, count);
|
||||
buffer[count] = '\0';
|
||||
|
||||
if (kstrtoul(buffer, 0, val))
|
||||
return 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
ocfs2_filecheck_type_parse(const char *name, unsigned int *type)
|
||||
{
|
||||
if (!strncmp(name, "fix", 4))
|
||||
*type = OCFS2_FILECHECK_TYPE_FIX;
|
||||
else if (!strncmp(name, "check", 6))
|
||||
*type = OCFS2_FILECHECK_TYPE_CHK;
|
||||
else if (!strncmp(name, "set", 4))
|
||||
*type = OCFS2_FILECHECK_TYPE_SET;
|
||||
else
|
||||
return 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
ocfs2_filecheck_args_parse(const char *name, const char *buf, size_t count,
|
||||
struct ocfs2_filecheck_args *args)
|
||||
{
|
||||
unsigned long val = 0;
|
||||
unsigned int type;
|
||||
|
||||
/* too short/long args length */
|
||||
if ((count < 1) || (count >= OCFS2_FILECHECK_ARGS_LEN))
|
||||
return 1;
|
||||
|
||||
if (ocfs2_filecheck_type_parse(name, &type))
|
||||
return 1;
|
||||
if (ocfs2_filecheck_args_get_long(buf, count, &val))
|
||||
return 1;
|
||||
|
||||
if (val <= 0)
|
||||
return 1;
|
||||
|
||||
args->fa_type = type;
|
||||
if (type == OCFS2_FILECHECK_TYPE_SET)
|
||||
args->fa_len = (unsigned int)val;
|
||||
else
|
||||
args->fa_ino = val;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t ocfs2_filecheck_show(struct kobject *kobj,
|
||||
struct kobj_attribute *attr,
|
||||
char *buf)
|
||||
{
|
||||
|
||||
ssize_t ret = 0, total = 0, remain = PAGE_SIZE;
|
||||
unsigned int type;
|
||||
struct ocfs2_filecheck_entry *p;
|
||||
struct ocfs2_filecheck_sysfs_entry *ent;
|
||||
|
||||
if (ocfs2_filecheck_type_parse(attr->attr.name, &type))
|
||||
return -EINVAL;
|
||||
|
||||
ent = ocfs2_filecheck_sysfs_get(kobj->parent->name);
|
||||
if (!ent) {
|
||||
mlog(ML_ERROR,
|
||||
"Cannot get the corresponding entry via device basename %s\n",
|
||||
kobj->name);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (type == OCFS2_FILECHECK_TYPE_SET) {
|
||||
spin_lock(&ent->fs_fcheck->fc_lock);
|
||||
total = snprintf(buf, remain, "%u\n", ent->fs_fcheck->fc_max);
|
||||
spin_unlock(&ent->fs_fcheck->fc_lock);
|
||||
goto exit;
|
||||
}
|
||||
|
||||
ret = snprintf(buf, remain, "INO\t\tDONE\tERROR\n");
|
||||
total += ret;
|
||||
remain -= ret;
|
||||
spin_lock(&ent->fs_fcheck->fc_lock);
|
||||
list_for_each_entry(p, &ent->fs_fcheck->fc_head, fe_list) {
|
||||
if (p->fe_type != type)
|
||||
continue;
|
||||
|
||||
ret = snprintf(buf + total, remain, "%lu\t\t%u\t%s\n",
|
||||
p->fe_ino, p->fe_done,
|
||||
ocfs2_filecheck_error(p->fe_status));
|
||||
if (ret < 0) {
|
||||
total = ret;
|
||||
break;
|
||||
}
|
||||
if (ret == remain) {
|
||||
/* snprintf() didn't fit */
|
||||
total = -E2BIG;
|
||||
break;
|
||||
}
|
||||
total += ret;
|
||||
remain -= ret;
|
||||
}
|
||||
spin_unlock(&ent->fs_fcheck->fc_lock);
|
||||
|
||||
exit:
|
||||
ocfs2_filecheck_sysfs_put(ent);
|
||||
return total;
|
||||
}
|
||||
|
||||
static int
|
||||
ocfs2_filecheck_erase_entry(struct ocfs2_filecheck_sysfs_entry *ent)
|
||||
{
|
||||
struct ocfs2_filecheck_entry *p;
|
||||
|
||||
list_for_each_entry(p, &ent->fs_fcheck->fc_head, fe_list) {
|
||||
if (p->fe_done) {
|
||||
list_del(&p->fe_list);
|
||||
kfree(p);
|
||||
ent->fs_fcheck->fc_size--;
|
||||
ent->fs_fcheck->fc_done--;
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
ocfs2_filecheck_erase_entries(struct ocfs2_filecheck_sysfs_entry *ent,
|
||||
unsigned int count)
|
||||
{
|
||||
unsigned int i = 0;
|
||||
unsigned int ret = 0;
|
||||
|
||||
while (i++ < count) {
|
||||
if (ocfs2_filecheck_erase_entry(ent))
|
||||
ret++;
|
||||
else
|
||||
break;
|
||||
}
|
||||
|
||||
return (ret == count ? 1 : 0);
|
||||
}
|
||||
|
||||
static void
|
||||
ocfs2_filecheck_done_entry(struct ocfs2_filecheck_sysfs_entry *ent,
|
||||
struct ocfs2_filecheck_entry *entry)
|
||||
{
|
||||
entry->fe_done = 1;
|
||||
spin_lock(&ent->fs_fcheck->fc_lock);
|
||||
ent->fs_fcheck->fc_done++;
|
||||
spin_unlock(&ent->fs_fcheck->fc_lock);
|
||||
}
|
||||
|
||||
static unsigned int
|
||||
ocfs2_filecheck_handle(struct super_block *sb,
|
||||
unsigned long ino, unsigned int flags)
|
||||
{
|
||||
unsigned int ret = OCFS2_FILECHECK_ERR_SUCCESS;
|
||||
struct inode *inode = NULL;
|
||||
int rc;
|
||||
|
||||
inode = ocfs2_iget(OCFS2_SB(sb), ino, flags, 0);
|
||||
if (IS_ERR(inode)) {
|
||||
rc = (int)(-(long)inode);
|
||||
if (rc >= OCFS2_FILECHECK_ERR_START &&
|
||||
rc < OCFS2_FILECHECK_ERR_END)
|
||||
ret = rc;
|
||||
else
|
||||
ret = OCFS2_FILECHECK_ERR_FAILED;
|
||||
} else
|
||||
iput(inode);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void
|
||||
ocfs2_filecheck_handle_entry(struct ocfs2_filecheck_sysfs_entry *ent,
|
||||
struct ocfs2_filecheck_entry *entry)
|
||||
{
|
||||
if (entry->fe_type == OCFS2_FILECHECK_TYPE_CHK)
|
||||
entry->fe_status = ocfs2_filecheck_handle(ent->fs_sb,
|
||||
entry->fe_ino, OCFS2_FI_FLAG_FILECHECK_CHK);
|
||||
else if (entry->fe_type == OCFS2_FILECHECK_TYPE_FIX)
|
||||
entry->fe_status = ocfs2_filecheck_handle(ent->fs_sb,
|
||||
entry->fe_ino, OCFS2_FI_FLAG_FILECHECK_FIX);
|
||||
else
|
||||
entry->fe_status = OCFS2_FILECHECK_ERR_UNSUPPORTED;
|
||||
|
||||
ocfs2_filecheck_done_entry(ent, entry);
|
||||
}
|
||||
|
||||
static ssize_t ocfs2_filecheck_store(struct kobject *kobj,
|
||||
struct kobj_attribute *attr,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct ocfs2_filecheck_args args;
|
||||
struct ocfs2_filecheck_entry *entry;
|
||||
struct ocfs2_filecheck_sysfs_entry *ent;
|
||||
ssize_t ret = 0;
|
||||
|
||||
if (count == 0)
|
||||
return count;
|
||||
|
||||
if (ocfs2_filecheck_args_parse(attr->attr.name, buf, count, &args)) {
|
||||
mlog(ML_ERROR, "Invalid arguments for online file check\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ent = ocfs2_filecheck_sysfs_get(kobj->parent->name);
|
||||
if (!ent) {
|
||||
mlog(ML_ERROR,
|
||||
"Cannot get the corresponding entry via device basename %s\n",
|
||||
kobj->parent->name);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (args.fa_type == OCFS2_FILECHECK_TYPE_SET) {
|
||||
ret = ocfs2_filecheck_adjust_max(ent, args.fa_len);
|
||||
goto exit;
|
||||
}
|
||||
|
||||
entry = kmalloc(sizeof(struct ocfs2_filecheck_entry), GFP_NOFS);
|
||||
if (!entry) {
|
||||
ret = -ENOMEM;
|
||||
goto exit;
|
||||
}
|
||||
|
||||
spin_lock(&ent->fs_fcheck->fc_lock);
|
||||
if ((ent->fs_fcheck->fc_size >= ent->fs_fcheck->fc_max) &&
|
||||
(ent->fs_fcheck->fc_done == 0)) {
|
||||
mlog(ML_ERROR,
|
||||
"Cannot do more file check "
|
||||
"since file check queue(%u) is full now\n",
|
||||
ent->fs_fcheck->fc_max);
|
||||
ret = -EBUSY;
|
||||
kfree(entry);
|
||||
} else {
|
||||
if ((ent->fs_fcheck->fc_size >= ent->fs_fcheck->fc_max) &&
|
||||
(ent->fs_fcheck->fc_done > 0)) {
|
||||
/* Delete the oldest entry which was done,
|
||||
* make sure the entry size in list does
|
||||
* not exceed maximum value
|
||||
*/
|
||||
BUG_ON(!ocfs2_filecheck_erase_entry(ent));
|
||||
}
|
||||
|
||||
entry->fe_ino = args.fa_ino;
|
||||
entry->fe_type = args.fa_type;
|
||||
entry->fe_done = 0;
|
||||
entry->fe_status = OCFS2_FILECHECK_ERR_INPROGRESS;
|
||||
list_add_tail(&entry->fe_list, &ent->fs_fcheck->fc_head);
|
||||
ent->fs_fcheck->fc_size++;
|
||||
}
|
||||
spin_unlock(&ent->fs_fcheck->fc_lock);
|
||||
|
||||
if (!ret)
|
||||
ocfs2_filecheck_handle_entry(ent, entry);
|
||||
|
||||
exit:
|
||||
ocfs2_filecheck_sysfs_put(ent);
|
||||
return (!ret ? count : ret);
|
||||
}
|
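From userspace, the attributes created above surface as check/fix/set files under the per-device "filecheck" kset (the super.c hunk later in this series places them at /sys/fs/ocfs2/&lt;devname&gt;/filecheck). A sketch of queueing a check and reading the result table; the device name "sdb1" and inode number 8467 are purely illustrative:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd = open("/sys/fs/ocfs2/sdb1/filecheck/check", O_RDWR);

	if (fd < 0)
		return 1;
	write(fd, "8467", 4);			/* queue a check for inode 8467 */
	lseek(fd, 0, SEEK_SET);
	n = read(fd, buf, sizeof(buf) - 1);	/* "INO  DONE  ERROR" table */
	if (n > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);
	}
	close(fd);
	return 0;
}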
49
fs/ocfs2/filecheck.h
Normal file
@@ -0,0 +1,49 @@
/* -*- mode: c; c-basic-offset: 8; -*-
 * vim: noexpandtab sw=8 ts=8 sts=0:
 *
 * filecheck.h
 *
 * Online file check.
 *
 * Copyright (C) 2016 SuSE. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public
 * License as published by the Free Software Foundation, version 2.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 */


#ifndef FILECHECK_H
#define FILECHECK_H

#include <linux/types.h>
#include <linux/list.h>


/* File check errno */
enum {
	OCFS2_FILECHECK_ERR_SUCCESS = 0,	/* Success */
	OCFS2_FILECHECK_ERR_FAILED = 1000,	/* Other failure */
	OCFS2_FILECHECK_ERR_INPROGRESS,		/* In progress */
	OCFS2_FILECHECK_ERR_READONLY,		/* Read only */
	OCFS2_FILECHECK_ERR_INJBD,		/* Buffer in jbd */
	OCFS2_FILECHECK_ERR_INVALIDINO,		/* Invalid ino */
	OCFS2_FILECHECK_ERR_BLOCKECC,		/* Block ecc */
	OCFS2_FILECHECK_ERR_BLOCKNO,		/* Block number */
	OCFS2_FILECHECK_ERR_VALIDFLAG,		/* Inode valid flag */
	OCFS2_FILECHECK_ERR_GENERATION,		/* Inode generation */
	OCFS2_FILECHECK_ERR_UNSUPPORTED		/* Unsupported */
};

#define OCFS2_FILECHECK_ERR_START	OCFS2_FILECHECK_ERR_FAILED
#define OCFS2_FILECHECK_ERR_END		OCFS2_FILECHECK_ERR_UNSUPPORTED

int ocfs2_filecheck_create_sysfs(struct super_block *sb);
int ocfs2_filecheck_remove_sysfs(struct super_block *sb);

#endif	/* FILECHECK_H */
225
fs/ocfs2/inode.c
@@ -53,6 +53,7 @@
#include "xattr.h"
#include "refcounttree.h"
#include "ocfs2_trace.h"
#include "filecheck.h"

#include "buffer_head_io.h"

@@ -74,6 +75,14 @@ static int ocfs2_truncate_for_delete(struct ocfs2_super *osb,
				     struct inode *inode,
				     struct buffer_head *fe_bh);

static int ocfs2_filecheck_read_inode_block_full(struct inode *inode,
						 struct buffer_head **bh,
						 int flags, int type);
static int ocfs2_filecheck_validate_inode_block(struct super_block *sb,
						struct buffer_head *bh);
static int ocfs2_filecheck_repair_inode_block(struct super_block *sb,
					      struct buffer_head *bh);

void ocfs2_set_inode_flags(struct inode *inode)
{
	unsigned int flags = OCFS2_I(inode)->ip_attr;
@@ -127,6 +136,7 @@ struct inode *ocfs2_ilookup(struct super_block *sb, u64 blkno)
struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, unsigned flags,
			 int sysfile_type)
{
	int rc = 0;
	struct inode *inode = NULL;
	struct super_block *sb = osb->sb;
	struct ocfs2_find_inode_args args;
@@ -161,12 +171,17 @@ struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 blkno, unsigned flags,
	}
	trace_ocfs2_iget5_locked(inode->i_state);
	if (inode->i_state & I_NEW) {
		ocfs2_read_locked_inode(inode, &args);
		rc = ocfs2_read_locked_inode(inode, &args);
		unlock_new_inode(inode);
	}
	if (is_bad_inode(inode)) {
		iput(inode);
		inode = ERR_PTR(-ESTALE);
		if ((flags & OCFS2_FI_FLAG_FILECHECK_CHK) ||
		    (flags & OCFS2_FI_FLAG_FILECHECK_FIX))
			/* Return OCFS2_FILECHECK_ERR_XXX related errno */
			inode = ERR_PTR(rc);
		else
			inode = ERR_PTR(-ESTALE);
		goto bail;
	}

@ -410,7 +425,7 @@ static int ocfs2_read_locked_inode(struct inode *inode,
|
||||
struct ocfs2_super *osb;
|
||||
struct ocfs2_dinode *fe;
|
||||
struct buffer_head *bh = NULL;
|
||||
int status, can_lock;
|
||||
int status, can_lock, lock_level = 0;
|
||||
u32 generation = 0;
|
||||
|
||||
status = -EINVAL;
|
||||
@ -478,7 +493,7 @@ static int ocfs2_read_locked_inode(struct inode *inode,
|
||||
mlog_errno(status);
|
||||
return status;
|
||||
}
|
||||
status = ocfs2_inode_lock(inode, NULL, 0);
|
||||
status = ocfs2_inode_lock(inode, NULL, lock_level);
|
||||
if (status) {
|
||||
make_bad_inode(inode);
|
||||
mlog_errno(status);
|
||||
@ -495,16 +510,32 @@ static int ocfs2_read_locked_inode(struct inode *inode,
|
||||
}
|
||||
|
||||
if (can_lock) {
|
||||
status = ocfs2_read_inode_block_full(inode, &bh,
|
||||
OCFS2_BH_IGNORE_CACHE);
|
||||
if (args->fi_flags & OCFS2_FI_FLAG_FILECHECK_CHK)
|
||||
status = ocfs2_filecheck_read_inode_block_full(inode,
|
||||
&bh, OCFS2_BH_IGNORE_CACHE, 0);
|
||||
else if (args->fi_flags & OCFS2_FI_FLAG_FILECHECK_FIX)
|
||||
status = ocfs2_filecheck_read_inode_block_full(inode,
|
||||
&bh, OCFS2_BH_IGNORE_CACHE, 1);
|
||||
else
|
||||
status = ocfs2_read_inode_block_full(inode,
|
||||
&bh, OCFS2_BH_IGNORE_CACHE);
|
||||
} else {
|
||||
status = ocfs2_read_blocks_sync(osb, args->fi_blkno, 1, &bh);
|
||||
/*
|
||||
* If buffer is in jbd, then its checksum may not have been
|
||||
* computed as yet.
|
||||
*/
|
||||
if (!status && !buffer_jbd(bh))
|
||||
status = ocfs2_validate_inode_block(osb->sb, bh);
|
||||
if (!status && !buffer_jbd(bh)) {
|
||||
if (args->fi_flags & OCFS2_FI_FLAG_FILECHECK_CHK)
|
||||
status = ocfs2_filecheck_validate_inode_block(
|
||||
osb->sb, bh);
|
||||
else if (args->fi_flags & OCFS2_FI_FLAG_FILECHECK_FIX)
|
||||
status = ocfs2_filecheck_repair_inode_block(
|
||||
osb->sb, bh);
|
||||
else
|
||||
status = ocfs2_validate_inode_block(
|
||||
osb->sb, bh);
|
||||
}
|
||||
}
|
||||
if (status < 0) {
|
||||
mlog_errno(status);
|
||||
@ -532,11 +563,24 @@ static int ocfs2_read_locked_inode(struct inode *inode,
|
||||
|
||||
BUG_ON(args->fi_blkno != le64_to_cpu(fe->i_blkno));
|
||||
|
||||
if (buffer_dirty(bh) && !buffer_jbd(bh)) {
|
||||
if (can_lock) {
|
||||
ocfs2_inode_unlock(inode, lock_level);
|
||||
lock_level = 1;
|
||||
ocfs2_inode_lock(inode, NULL, lock_level);
|
||||
}
|
||||
status = ocfs2_write_block(osb, bh, INODE_CACHE(inode));
|
||||
if (status < 0) {
|
||||
mlog_errno(status);
|
||||
goto bail;
|
||||
}
|
||||
}
|
||||
|
||||
status = 0;
|
||||
|
||||
bail:
|
||||
if (can_lock)
|
||||
ocfs2_inode_unlock(inode, 0);
|
||||
ocfs2_inode_unlock(inode, lock_level);
|
||||
|
||||
if (status < 0)
|
||||
make_bad_inode(inode);
|
||||
@ -1397,6 +1441,169 @@ int ocfs2_validate_inode_block(struct super_block *sb,
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int ocfs2_filecheck_validate_inode_block(struct super_block *sb,
|
||||
struct buffer_head *bh)
|
||||
{
|
||||
int rc = 0;
|
||||
struct ocfs2_dinode *di = (struct ocfs2_dinode *)bh->b_data;
|
||||
|
||||
trace_ocfs2_filecheck_validate_inode_block(
|
||||
(unsigned long long)bh->b_blocknr);
|
||||
|
||||
BUG_ON(!buffer_uptodate(bh));
|
||||
|
||||
/*
|
||||
* Call ocfs2_validate_meta_ecc() first since it has ecc repair
|
||||
* function, but we should not return error immediately when ecc
|
||||
* validation fails, because the reason is quite likely the invalid
|
||||
* inode number inputed.
|
||||
*/
|
||||
rc = ocfs2_validate_meta_ecc(sb, bh->b_data, &di->i_check);
|
||||
if (rc) {
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: checksum failed for dinode %llu\n",
|
||||
(unsigned long long)bh->b_blocknr);
|
||||
rc = -OCFS2_FILECHECK_ERR_BLOCKECC;
|
||||
}
|
||||
|
||||
if (!OCFS2_IS_VALID_DINODE(di)) {
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: invalid dinode #%llu: signature = %.*s\n",
|
||||
(unsigned long long)bh->b_blocknr, 7, di->i_signature);
|
||||
rc = -OCFS2_FILECHECK_ERR_INVALIDINO;
|
||||
goto bail;
|
||||
} else if (rc)
|
||||
goto bail;
|
||||
|
||||
if (le64_to_cpu(di->i_blkno) != bh->b_blocknr) {
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: invalid dinode #%llu: i_blkno is %llu\n",
|
||||
(unsigned long long)bh->b_blocknr,
|
||||
(unsigned long long)le64_to_cpu(di->i_blkno));
|
||||
rc = -OCFS2_FILECHECK_ERR_BLOCKNO;
|
||||
goto bail;
|
||||
}
|
||||
|
||||
if (!(di->i_flags & cpu_to_le32(OCFS2_VALID_FL))) {
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: invalid dinode #%llu: OCFS2_VALID_FL "
|
||||
"not set\n",
|
||||
(unsigned long long)bh->b_blocknr);
|
||||
rc = -OCFS2_FILECHECK_ERR_VALIDFLAG;
|
||||
goto bail;
|
||||
}
|
||||
|
||||
if (le32_to_cpu(di->i_fs_generation) !=
|
||||
OCFS2_SB(sb)->fs_generation) {
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: invalid dinode #%llu: fs_generation is %u\n",
|
||||
(unsigned long long)bh->b_blocknr,
|
||||
le32_to_cpu(di->i_fs_generation));
|
||||
rc = -OCFS2_FILECHECK_ERR_GENERATION;
|
||||
goto bail;
|
||||
}
|
||||
|
||||
bail:
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int ocfs2_filecheck_repair_inode_block(struct super_block *sb,
|
||||
struct buffer_head *bh)
|
||||
{
|
||||
int changed = 0;
|
||||
struct ocfs2_dinode *di = (struct ocfs2_dinode *)bh->b_data;
|
||||
|
||||
if (!ocfs2_filecheck_validate_inode_block(sb, bh))
|
||||
return 0;
|
||||
|
||||
trace_ocfs2_filecheck_repair_inode_block(
|
||||
(unsigned long long)bh->b_blocknr);
|
||||
|
||||
if (ocfs2_is_hard_readonly(OCFS2_SB(sb)) ||
|
||||
ocfs2_is_soft_readonly(OCFS2_SB(sb))) {
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: cannot repair dinode #%llu "
|
||||
"on readonly filesystem\n",
|
||||
(unsigned long long)bh->b_blocknr);
|
||||
return -OCFS2_FILECHECK_ERR_READONLY;
|
||||
}
|
||||
|
||||
if (buffer_jbd(bh)) {
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: cannot repair dinode #%llu, "
|
||||
"its buffer is in jbd\n",
|
||||
(unsigned long long)bh->b_blocknr);
|
||||
return -OCFS2_FILECHECK_ERR_INJBD;
|
||||
}
|
||||
|
||||
if (!OCFS2_IS_VALID_DINODE(di)) {
|
||||
/* Cannot fix invalid inode block */
|
||||
return -OCFS2_FILECHECK_ERR_INVALIDINO;
|
||||
}
|
||||
|
||||
if (!(di->i_flags & cpu_to_le32(OCFS2_VALID_FL))) {
|
||||
/* Cannot just add VALID_FL flag back as a fix,
|
||||
* need more things to check here.
|
||||
*/
|
||||
return -OCFS2_FILECHECK_ERR_VALIDFLAG;
|
||||
}
|
||||
|
||||
if (le64_to_cpu(di->i_blkno) != bh->b_blocknr) {
|
||||
di->i_blkno = cpu_to_le64(bh->b_blocknr);
|
||||
changed = 1;
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: reset dinode #%llu: i_blkno to %llu\n",
|
||||
(unsigned long long)bh->b_blocknr,
|
||||
(unsigned long long)le64_to_cpu(di->i_blkno));
|
||||
}
|
||||
|
||||
if (le32_to_cpu(di->i_fs_generation) !=
|
||||
OCFS2_SB(sb)->fs_generation) {
|
||||
di->i_fs_generation = cpu_to_le32(OCFS2_SB(sb)->fs_generation);
|
||||
changed = 1;
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: reset dinode #%llu: fs_generation to %u\n",
|
||||
(unsigned long long)bh->b_blocknr,
|
||||
le32_to_cpu(di->i_fs_generation));
|
||||
}
|
||||
|
||||
if (changed || ocfs2_validate_meta_ecc(sb, bh->b_data, &di->i_check)) {
|
||||
ocfs2_compute_meta_ecc(sb, bh->b_data, &di->i_check);
|
||||
mark_buffer_dirty(bh);
|
||||
mlog(ML_ERROR,
|
||||
"Filecheck: reset dinode #%llu: compute meta ecc\n",
|
||||
(unsigned long long)bh->b_blocknr);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int
|
||||
ocfs2_filecheck_read_inode_block_full(struct inode *inode,
|
||||
struct buffer_head **bh,
|
||||
int flags, int type)
|
||||
{
|
||||
int rc;
|
||||
struct buffer_head *tmp = *bh;
|
||||
|
||||
if (!type) /* Check inode block */
|
||||
rc = ocfs2_read_blocks(INODE_CACHE(inode),
|
||||
OCFS2_I(inode)->ip_blkno,
|
||||
1, &tmp, flags,
|
||||
ocfs2_filecheck_validate_inode_block);
|
||||
else /* Repair inode block */
|
||||
rc = ocfs2_read_blocks(INODE_CACHE(inode),
|
||||
OCFS2_I(inode)->ip_blkno,
|
||||
1, &tmp, flags,
|
||||
ocfs2_filecheck_repair_inode_block);
|
||||
|
||||
/* If ocfs2_read_blocks() got us a new bh, pass it up. */
|
||||
if (!rc && !*bh)
|
||||
*bh = tmp;
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
int ocfs2_read_inode_block_full(struct inode *inode, struct buffer_head **bh,
|
||||
int flags)
|
||||
{
|
||||
|
@@ -139,6 +139,9 @@ int ocfs2_drop_inode(struct inode *inode);
/* Flags for ocfs2_iget() */
#define OCFS2_FI_FLAG_SYSFILE		0x1
#define OCFS2_FI_FLAG_ORPHAN_RECOVERY	0x2
#define OCFS2_FI_FLAG_FILECHECK_CHK	0x4
#define OCFS2_FI_FLAG_FILECHECK_FIX	0x8

struct inode *ocfs2_ilookup(struct super_block *sb, u64 feoff);
struct inode *ocfs2_iget(struct ocfs2_super *osb, u64 feoff, unsigned flags,
			 int sysfile_type);
@@ -1540,6 +1540,8 @@ DEFINE_OCFS2_ULL_INT_EVENT(ocfs2_read_locked_inode);
DEFINE_OCFS2_INT_INT_EVENT(ocfs2_check_orphan_recovery_state);

DEFINE_OCFS2_ULL_EVENT(ocfs2_validate_inode_block);
DEFINE_OCFS2_ULL_EVENT(ocfs2_filecheck_validate_inode_block);
DEFINE_OCFS2_ULL_EVENT(ocfs2_filecheck_repair_inode_block);

TRACE_EVENT(ocfs2_inode_is_valid_to_delete,
	TP_PROTO(void *task, void *dc_task, unsigned long long ino,
@@ -629,7 +629,8 @@ static struct attribute_group ocfs2_attr_group = {
	.attrs = ocfs2_attrs,
};

static struct kset *ocfs2_kset;
struct kset *ocfs2_kset;
EXPORT_SYMBOL_GPL(ocfs2_kset);

static void ocfs2_sysfs_exit(void)
{
@@ -298,4 +298,6 @@ void ocfs2_stack_glue_set_max_proto_version(struct ocfs2_protocol_version *max_p
int ocfs2_stack_glue_register(struct ocfs2_stack_plugin *plugin);
void ocfs2_stack_glue_unregister(struct ocfs2_stack_plugin *plugin);

extern struct kset *ocfs2_kset;

#endif  /* STACKGLUE_H */

@@ -74,6 +74,7 @@
#include "suballoc.h"

#include "buffer_head_io.h"
#include "filecheck.h"

static struct kmem_cache *ocfs2_inode_cachep;
struct kmem_cache *ocfs2_dquot_cachep;
@@ -1205,6 +1206,9 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
	/* Start this when the mount is almost sure of being successful */
	ocfs2_orphan_scan_start(osb);

	/* Create filecheck sysfile /sys/fs/ocfs2/<devname>/filecheck */
	ocfs2_filecheck_create_sysfs(sb);

	return status;

read_super_error:
@@ -1668,6 +1672,7 @@ static void ocfs2_put_super(struct super_block *sb)

	ocfs2_sync_blockdev(sb);
	ocfs2_dismount_volume(sb, 0);
	ocfs2_filecheck_remove_sysfs(sb);
}

static int ocfs2_statfs(struct dentry *dentry, struct kstatfs *buf)
@@ -992,14 +992,12 @@ struct file *filp_open(const char *filename, int flags, umode_t mode)
EXPORT_SYMBOL(filp_open);

struct file *file_open_root(struct dentry *dentry, struct vfsmount *mnt,
			    const char *filename, int flags)
			    const char *filename, int flags, umode_t mode)
{
	struct open_flags op;
	int err = build_open_flags(flags, 0, &op);
	int err = build_open_flags(flags, mode, &op);
	if (err)
		return ERR_PTR(err);
	if (flags & O_CREAT)
		return ERR_PTR(-EINVAL);
	return do_file_open_root(dentry, mnt, filename, &op);
}
EXPORT_SYMBOL(file_open_root);
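Dropping the O_CREAT rejection and threading a umode_t through means callers can now create files relative to an arbitrary root. A sketch of an updated call site (the coredump hunk earlier in this series is the real in-tree user); 'root' is a struct path the caller already holds, and the filename is illustrative:

	struct file *filp;

	filp = file_open_root(root.dentry, root.mnt, "core.example",
			      O_CREAT | O_RDWR | O_EXCL | O_NOFOLLOW, 0600);
	if (IS_ERR(filp))
		return PTR_ERR(filp);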
@@ -40,8 +40,6 @@ struct inode;
struct dentry;
struct user_namespace;

struct user_namespace *current_user_ns(void);

extern const kernel_cap_t __cap_empty_set;
extern const kernel_cap_t __cap_init_eff_set;

@ -5,6 +5,8 @@
|
||||
* syscall compatibility layer.
|
||||
*/
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
#ifdef CONFIG_COMPAT
|
||||
|
||||
#include <linux/stat.h>
|
||||
@ -719,9 +721,22 @@ asmlinkage long compat_sys_sched_rr_get_interval(compat_pid_t pid,
|
||||
|
||||
asmlinkage long compat_sys_fanotify_mark(int, unsigned int, __u32, __u32,
|
||||
int, const char __user *);
|
||||
|
||||
/*
|
||||
* For most but not all architectures, "am I in a compat syscall?" and
|
||||
* "am I a compat task?" are the same question. For architectures on which
|
||||
* they aren't the same question, arch code can override in_compat_syscall.
|
||||
*/
|
||||
|
||||
#ifndef in_compat_syscall
|
||||
static inline bool in_compat_syscall(void) { return is_compat_task(); }
|
||||
#endif
|
||||
|
||||
#else
|
||||
|
||||
#define is_compat_task() (0)
|
||||
static inline bool in_compat_syscall(void) { return false; }
|
||||
|
||||
#endif /* CONFIG_COMPAT */
|
||||
|
||||
#endif /* _LINUX_COMPAT_H */
|
||||
|
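The intended use is in code reached from a syscall that must pick a user-visible ABI. A hedged sketch of an ioctl handler choosing between native and 32-bit layouts (the struct names are made up for illustration):

	/* Hypothetical driver ioctl: pick the 32-bit layout when the
	 * *syscall* is compat, regardless of how the task was execve'd. */
	if (in_compat_syscall()) {
		struct foo_args32 a32;	/* hypothetical 32-bit layout */

		if (copy_from_user(&a32, uarg, sizeof(a32)))
			return -EFAULT;
		args.len = a32.len;
	} else {
		if (copy_from_user(&args, uarg, sizeof(args)))
			return -EFAULT;
	}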
@@ -607,8 +607,6 @@ static inline int cpulist_parse(const char *buf, struct cpumask *dstp)

/**
 * cpumask_size - size to allocate for a 'struct cpumask' in bytes
 *
 * This will eventually be a runtime variable, depending on nr_cpu_ids.
 */
static inline size_t cpumask_size(void)
{

@@ -377,7 +377,10 @@ extern struct user_namespace init_user_ns;
#ifdef CONFIG_USER_NS
#define current_user_ns()	(current_cred_xxx(user_ns))
#else
#define current_user_ns()	(&init_user_ns)
static inline struct user_namespace *current_user_ns(void)
{
	return &init_user_ns;
}
#endif

@@ -2263,7 +2263,7 @@ extern long do_sys_open(int dfd, const char __user *filename, int flags,
extern struct file *file_open_name(struct filename *, int, umode_t);
extern struct file *filp_open(const char *, int, umode_t);
extern struct file *file_open_root(struct dentry *, struct vfsmount *,
				   const char *, int);
				   const char *, int, umode_t);
extern struct file * dentry_open(const struct path *, int, const struct cred *);
extern int filp_close(struct file *, fl_owner_t id);

@@ -135,6 +135,7 @@ enum {
	/* See memremap() kernel-doc for usage description... */
	MEMREMAP_WB = 1 << 0,
	MEMREMAP_WT = 1 << 1,
	MEMREMAP_WC = 1 << 2,
};

void *memremap(resource_size_t offset, size_t size, unsigned long flags);
include/linux/kcov.h (new file, 29 lines)
@@ -0,0 +1,29 @@
#ifndef _LINUX_KCOV_H
#define _LINUX_KCOV_H

#include <uapi/linux/kcov.h>

struct task_struct;

#ifdef CONFIG_KCOV

void kcov_task_init(struct task_struct *t);
void kcov_task_exit(struct task_struct *t);

enum kcov_mode {
	/* Coverage collection is not enabled yet. */
	KCOV_MODE_DISABLED = 0,
	/*
	 * Tracing coverage collection mode.
	 * Covered PCs are collected in a per-task buffer.
	 */
	KCOV_MODE_TRACE = 1,
};

#else

static inline void kcov_task_init(struct task_struct *t) {}
static inline void kcov_task_exit(struct task_struct *t) {}

#endif /* CONFIG_KCOV */
#endif /* _LINUX_KCOV_H */

@@ -255,7 +255,7 @@ extern long (*panic_blink)(int state);
__printf(1, 2)
void panic(const char *fmt, ...)
	__noreturn __cold;
void nmi_panic_self_stop(struct pt_regs *);
void nmi_panic(struct pt_regs *regs, const char *msg);
extern void oops_enter(void);
extern void oops_exit(void);
void print_oops_end_marker(void);
@@ -456,25 +456,6 @@ extern bool crash_kexec_post_notifiers;
extern atomic_t panic_cpu;
#define PANIC_CPU_INVALID	-1

/*
 * A variant of panic() called from NMI context. We return if we've already
 * panicked on this CPU. If another CPU already panicked, loop in
 * nmi_panic_self_stop() which can provide architecture dependent code such
 * as saving register state for crash dump.
 */
#define nmi_panic(regs, fmt, ...)					\
do {									\
	int old_cpu, cpu;						\
									\
	cpu = raw_smp_processor_id();					\
	old_cpu = atomic_cmpxchg(&panic_cpu, PANIC_CPU_INVALID, cpu);	\
									\
	if (old_cpu == PANIC_CPU_INVALID)				\
		panic(fmt, ##__VA_ARGS__);				\
	else if (old_cpu != cpu)					\
		nmi_panic_self_stop(regs);				\
} while (0)

/*
 * Only to be used by arch init code. If the user over-wrote the default
 * CONFIG_PANIC_TIMEOUT, honor it.

@@ -401,7 +401,7 @@ __kfifo_int_must_check_helper( \
	((typeof(__tmp->type))__kfifo->data) : \
	(__tmp->buf) \
	)[__kfifo->in & __tmp->kfifo.mask] = \
		(typeof(*__tmp->type))__val; \
		*(typeof(__tmp->type))&__val; \
	smp_wmb(); \
	__kfifo->in++; \
} \
@@ -137,6 +137,13 @@ struct rio_switch_ops {
	int (*em_handle) (struct rio_dev *dev, u8 swport);
};

enum rio_device_state {
	RIO_DEVICE_INITIALIZING,
	RIO_DEVICE_RUNNING,
	RIO_DEVICE_GONE,
	RIO_DEVICE_SHUTDOWN,
};

/**
 * struct rio_dev - RIO device info
 * @global_list: Node in list of all RIO devices
@@ -165,6 +172,7 @@ struct rio_switch_ops {
 * @destid: Network destination ID (or associated destid for switch)
 * @hopcount: Hopcount to this device
 * @prev: Previous RIO device connected to the current one
 * @state: device state
 * @rswitch: struct rio_switch (if valid for this device)
 */
struct rio_dev {
@@ -194,6 +202,7 @@ struct rio_dev {
	u16 destid;
	u8 hopcount;
	struct rio_dev *prev;
	atomic_t state;
	struct rio_switch rswitch[0];	/* RIO switch info */
};

@@ -202,6 +211,7 @@ struct rio_dev {
#define to_rio_dev(n) container_of(n, struct rio_dev, dev)
#define sw_to_rio_dev(n) container_of(n, struct rio_dev, rswitch[0])
#define to_rio_mport(n) container_of(n, struct rio_mport, dev)
#define to_rio_net(n) container_of(n, struct rio_net, dev)

/**
 * struct rio_msg - RIO message event
@@ -235,8 +245,11 @@ enum rio_phy_type {
/**
 * struct rio_mport - RIO master port info
 * @dbells: List of doorbell events
 * @pwrites: List of portwrite events
 * @node: Node in global list of master ports
 * @nnode: Node in network list of master ports
 * @net: RIO net this mport is attached to
 * @lock: lock to synchronize lists manipulations
 * @iores: I/O mem resource that this master port interface owns
 * @riores: RIO resources that this master port interface owns
 * @inb_msg: RIO inbound message event descriptors
@@ -253,11 +266,16 @@ enum rio_phy_type {
 * @priv: Master port private data
 * @dma: DMA device associated with mport
 * @nscan: RapidIO network enumeration/discovery operations
 * @state: mport device state
 * @pwe_refcnt: port-write enable ref counter to track enable/disable requests
 */
struct rio_mport {
	struct list_head dbells;	/* list of doorbell events */
	struct list_head pwrites;	/* list of portwrite events */
	struct list_head node;		/* node in global list of ports */
	struct list_head nnode;		/* node in net list of ports */
	struct rio_net *net;		/* RIO net this mport is attached to */
	struct mutex lock;
	struct resource iores;
	struct resource riores[RIO_MAX_MPORT_RESOURCES];
	struct rio_msg inb_msg[RIO_MAX_MBOX];
@@ -280,20 +298,20 @@ struct rio_mport {
	struct dma_device	dma;
#endif
	struct rio_scan *nscan;
	atomic_t state;
	unsigned int pwe_refcnt;
};

static inline int rio_mport_is_running(struct rio_mport *mport)
{
	return atomic_read(&mport->state) == RIO_DEVICE_RUNNING;
}

/*
 * Enumeration/discovery control flags
 */
#define RIO_SCAN_ENUM_NO_WAIT	0x00000001 /* Do not wait for enum completed */

struct rio_id_table {
	u16 start;	/* logical minimal id */
	u32 max;	/* max number of IDs in table */
	spinlock_t lock;
	unsigned long *table;
};

/**
 * struct rio_net - RIO network info
 * @node: Node in global list of RIO networks
@@ -302,7 +320,9 @@ struct rio_id_table {
 * @mports: List of master ports accessing this network
 * @hport: Default port for accessing this network
 * @id: RIO network ID
 * @destid_table: destID allocation table
 * @dev: Device object
 * @enum_data: private data specific to a network enumerator
 * @release: enumerator-specific release callback
 */
struct rio_net {
	struct list_head node;	/* node in list of networks */
@@ -311,7 +331,53 @@ struct rio_net {
	struct list_head mports;	/* list of ports accessing net */
	struct rio_mport *hport;	/* primary port for accessing net */
	unsigned char id;	/* RIO network ID */
	struct rio_id_table destid_table;  /* destID allocation table */
	struct device dev;
	void *enum_data;	/* private data for enumerator of the network */
	void (*release)(struct rio_net *net);
};

enum rio_link_speed {
	RIO_LINK_DOWN = 0,	/* SRIO Link not initialized */
	RIO_LINK_125 = 1,	/* 1.25 GBaud  */
	RIO_LINK_250 = 2,	/* 2.5 GBaud   */
	RIO_LINK_312 = 3,	/* 3.125 GBaud */
	RIO_LINK_500 = 4,	/* 5.0 GBaud   */
	RIO_LINK_625 = 5	/* 6.25 GBaud  */
};

enum rio_link_width {
	RIO_LINK_1X  = 0,
	RIO_LINK_1XR = 1,
	RIO_LINK_2X  = 3,
	RIO_LINK_4X  = 2,
	RIO_LINK_8X  = 4,
	RIO_LINK_16X = 5
};

enum rio_mport_flags {
	RIO_MPORT_DMA	 = (1 << 0),	/* supports DMA data transfers */
	RIO_MPORT_DMA_SG = (1 << 1),	/* DMA supports HW SG mode */
	RIO_MPORT_IBSG	 = (1 << 2),	/* inbound mapping supports SG */
};

/**
 * struct rio_mport_attr - RIO mport device attributes
 * @flags: mport device capability flags
 * @link_speed: SRIO link speed value (as defined by RapidIO specification)
 * @link_width: SRIO link width value (as defined by RapidIO specification)
 * @dma_max_sge: number of SG list entries that can be handled by DMA channel(s)
 * @dma_max_size: max number of bytes in single DMA transfer (SG entry)
 * @dma_align: alignment shift for DMA operations (as for other DMA operations)
 */
struct rio_mport_attr {
	int flags;
	int link_speed;
	int link_width;

	/* DMA capability info: valid only if RIO_MPORT_DMA flag is set */
	int dma_max_sge;
	int dma_max_size;
	int dma_align;
};

/* Low-level architecture-dependent routines */
@@ -333,6 +399,9 @@ struct rio_net {
 * @get_inb_message: Callback to get a message from an inbound mailbox queue.
 * @map_inb: Callback to map RapidIO address region into local memory space.
 * @unmap_inb: Callback to unmap RapidIO address region mapped with map_inb().
 * @query_mport: Callback to query mport device attributes.
 * @map_outb: Callback to map outbound address region into local memory space.
 * @unmap_outb: Callback to unmap outbound RapidIO address region.
 */
struct rio_ops {
	int (*lcread) (struct rio_mport *mport, int index, u32 offset, int len,
@@ -358,6 +427,11 @@ struct rio_ops {
	int (*map_inb)(struct rio_mport *mport, dma_addr_t lstart,
			u64 rstart, u32 size, u32 flags);
	void (*unmap_inb)(struct rio_mport *mport, dma_addr_t lstart);
	int (*query_mport)(struct rio_mport *mport,
			   struct rio_mport_attr *attr);
	int (*map_outb)(struct rio_mport *mport, u16 destid, u64 rstart,
			u32 size, u32 flags, dma_addr_t *laddr);
	void (*unmap_outb)(struct rio_mport *mport, u16 destid, u64 rstart);
};

#define RIO_RESOURCE_MEM	0x00000100
@@ -376,6 +450,7 @@ struct rio_ops {
 * @id_table: RIO device ids to be associated with this driver
 * @probe: RIO device inserted
 * @remove: RIO device removed
 * @shutdown: shutdown notification callback
 * @suspend: RIO device suspended
 * @resume: RIO device awakened
 * @enable_wake: RIO device enable wake event
@@ -390,6 +465,7 @@ struct rio_driver {
	const struct rio_device_id *id_table;
	int (*probe) (struct rio_dev * dev, const struct rio_device_id * id);
	void (*remove) (struct rio_dev * dev);
	void (*shutdown)(struct rio_dev *dev);
	int (*suspend) (struct rio_dev * dev, u32 state);
	int (*resume) (struct rio_dev * dev);
	int (*enable_wake) (struct rio_dev * dev, u32 state, int enable);
@@ -476,10 +552,14 @@ struct rio_scan_node {
};

/* Architecture and hardware-specific functions */
extern int rio_mport_initialize(struct rio_mport *);
extern int rio_register_mport(struct rio_mport *);
extern int rio_unregister_mport(struct rio_mport *);
extern int rio_open_inb_mbox(struct rio_mport *, void *, int, int);
extern void rio_close_inb_mbox(struct rio_mport *, int);
extern int rio_open_outb_mbox(struct rio_mport *, void *, int, int);
extern void rio_close_outb_mbox(struct rio_mport *, int);
extern int rio_query_mport(struct rio_mport *port,
			   struct rio_mport_attr *mport_attr);

#endif				/* LINUX_RIO_H */
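A driver can use the new query hook to discover DMA capabilities before setting up transfers. A hedged sketch (error semantics assumed; rio_query_mport() presumably fails when the mport provides no query_mport callback):

	struct rio_mport_attr attr;
	int ret;

	ret = rio_query_mport(mport, &attr);
	if (ret)
		return ret;	/* no attributes available */
	if (attr.flags & RIO_MPORT_DMA)
		pr_info("mport DMA: max_sge=%d max_size=%d\n",
			attr.dma_max_sge, attr.dma_max_size);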
@@ -369,12 +369,24 @@ void rio_release_region(struct rio_dev *, int);
extern int rio_map_inb_region(struct rio_mport *mport, dma_addr_t local,
			      u64 rbase, u32 size, u32 rflags);
extern void rio_unmap_inb_region(struct rio_mport *mport, dma_addr_t lstart);
extern int rio_map_outb_region(struct rio_mport *mport, u16 destid, u64 rbase,
			       u32 size, u32 rflags, dma_addr_t *local);
extern void rio_unmap_outb_region(struct rio_mport *mport,
				  u16 destid, u64 rstart);

/* Port-Write management */
extern int rio_request_inb_pwrite(struct rio_dev *,
			int (*)(struct rio_dev *, union rio_pw_msg*, int));
extern int rio_release_inb_pwrite(struct rio_dev *);
extern int rio_inb_pwrite_handler(union rio_pw_msg *pw_msg);
extern int rio_add_mport_pw_handler(struct rio_mport *mport, void *dev_id,
			int (*pwcback)(struct rio_mport *mport, void *dev_id,
			union rio_pw_msg *msg, int step));
extern int rio_del_mport_pw_handler(struct rio_mport *mport, void *dev_id,
			int (*pwcback)(struct rio_mport *mport, void *dev_id,
			union rio_pw_msg *msg, int step));
extern int rio_inb_pwrite_handler(struct rio_mport *mport,
				  union rio_pw_msg *pw_msg);
extern void rio_pw_enable(struct rio_mport *mport, int enable);

/* LDM support */
int rio_register_driver(struct rio_driver *);
@@ -435,6 +447,7 @@ static inline void rio_set_drvdata(struct rio_dev *rdev, void *data)

/* Misc driver helpers */
extern u16 rio_local_get_device_id(struct rio_mport *port);
extern void rio_local_set_device_id(struct rio_mport *port, u16 did);
extern struct rio_dev *rio_get_device(u16 vid, u16 did, struct rio_dev *from);
extern struct rio_dev *rio_get_asm(u16 vid, u16 did, u16 asm_vid, u16 asm_did,
				   struct rio_dev *from);
include/linux/rio_mport_cdev.h (new file, 271 lines)
@@ -0,0 +1,271 @@
/*
 * Copyright (c) 2015-2016, Integrated Device Technology Inc.
 * Copyright (c) 2015, Prodrive Technologies
 * Copyright (c) 2015, Texas Instruments Incorporated
 * Copyright (c) 2015, RapidIO Trade Association
 * All rights reserved.
 *
 * This software is available to you under a choice of one of two licenses.
 * You may choose to be licensed under the terms of the GNU General Public
 * License(GPL) Version 2, or the BSD-3 Clause license below:
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice,
 * this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 * this list of conditions and the following disclaimer in the documentation
 * and/or other materials provided with the distribution.
 *
 * 3. Neither the name of the copyright holder nor the names of its contributors
 * may be used to endorse or promote products derived from this software without
 * specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
 * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
 * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
 * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
 * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#ifndef _RIO_MPORT_CDEV_H_
#define _RIO_MPORT_CDEV_H_

#ifndef __user
#define __user
#endif

struct rio_mport_maint_io {
	uint32_t rioid;		/* destID of remote device */
	uint32_t hopcount;	/* hopcount to remote device */
	uint32_t offset;	/* offset in register space */
	size_t length;		/* length in bytes */
	void __user *buffer;	/* data buffer */
};

/*
 * Definitions for RapidIO data transfers:
 * - memory mapped (MAPPED)
 * - packet generation from memory (TRANSFER)
 */
#define RIO_TRANSFER_MODE_MAPPED	(1 << 0)
#define RIO_TRANSFER_MODE_TRANSFER	(1 << 1)
#define RIO_CAP_DBL_SEND		(1 << 2)
#define RIO_CAP_DBL_RECV		(1 << 3)
#define RIO_CAP_PW_SEND			(1 << 4)
#define RIO_CAP_PW_RECV			(1 << 5)
#define RIO_CAP_MAP_OUTB		(1 << 6)
#define RIO_CAP_MAP_INB			(1 << 7)

struct rio_mport_properties {
	uint16_t hdid;
	uint8_t id;			/* Physical port ID */
	uint8_t index;
	uint32_t flags;
	uint32_t sys_size;		/* Default addressing size */
	uint8_t port_ok;
	uint8_t link_speed;
	uint8_t link_width;
	uint32_t dma_max_sge;
	uint32_t dma_max_size;
	uint32_t dma_align;
	uint32_t transfer_mode;		/* Default transfer mode */
	uint32_t cap_sys_size;		/* Capable system sizes */
	uint32_t cap_addr_size;		/* Capable addressing sizes */
	uint32_t cap_transfer_mode;	/* Capable transfer modes */
	uint32_t cap_mport;		/* Mport capabilities */
};

/*
 * Definitions for RapidIO events;
 * - incoming port-writes
 * - incoming doorbells
 */
#define RIO_DOORBELL	(1 << 0)
#define RIO_PORTWRITE	(1 << 1)

struct rio_doorbell {
	uint32_t rioid;
	uint16_t payload;
};

struct rio_doorbell_filter {
	uint32_t rioid;		/* 0xffffffff to match all ids */
	uint16_t low;
	uint16_t high;
};


struct rio_portwrite {
	uint32_t payload[16];
};

struct rio_pw_filter {
	uint32_t mask;
	uint32_t low;
	uint32_t high;
};

/* Setting the RapidIO base address for inbound requests to the value defined
 * below indicates that no specific RIO-to-local address translation is
 * requested and the driver should use direct (one-to-one) address mapping.
 */
#define RIO_MAP_ANY_ADDR	(uint64_t)(~((uint64_t) 0))

struct rio_mmap {
	uint32_t rioid;
	uint64_t rio_addr;
	uint64_t length;
	uint64_t handle;
	void *address;
};

struct rio_dma_mem {
	uint64_t length;	/* length of DMA memory */
	uint64_t dma_handle;	/* handle associated with this memory */
	void *buffer;		/* pointer to this memory */
};


struct rio_event {
	unsigned int header;	/* event type RIO_DOORBELL or RIO_PORTWRITE */
	union {
		struct rio_doorbell doorbell;	/* header for RIO_DOORBELL */
		struct rio_portwrite portwrite;	/* header for RIO_PORTWRITE */
	} u;
};

enum rio_transfer_sync {
	RIO_TRANSFER_SYNC,	/* synchronous transfer */
	RIO_TRANSFER_ASYNC,	/* asynchronous transfer */
	RIO_TRANSFER_FAF,	/* fire-and-forget transfer */
};

enum rio_transfer_dir {
	RIO_TRANSFER_DIR_READ,	/* Read operation */
	RIO_TRANSFER_DIR_WRITE,	/* Write operation */
};

/*
 * RapidIO data exchange transactions are lists of individual transfers. Each
 * transfer exchanges data between two RapidIO devices by remote direct memory
 * access and has its own completion code.
 *
 * The RapidIO specification defines four types of data exchange requests:
 * NREAD, NWRITE, SWRITE and NWRITE_R. The RapidIO DMA channel interface allows
 * to specify the required type of write operation or combination of them when
 * only the last data packet requires response.
 *
 * NREAD:    read up to 256 bytes from remote device memory into local memory
 * NWRITE:   write up to 256 bytes from local memory to remote device memory
 *           without confirmation
 * SWRITE:   as NWRITE, but all addresses and payloads must be 64-bit aligned
 * NWRITE_R: as NWRITE, but expect acknowledgment from remote device.
 *
 * The default exchange is chosen from NREAD and any of the WRITE modes as the
 * driver sees fit. For write requests the user can explicitly choose between
 * any of the write modes for each transaction.
 */
enum rio_exchange {
	RIO_EXCHANGE_DEFAULT,		/* Default method */
	RIO_EXCHANGE_NWRITE,		/* All packets using NWRITE */
	RIO_EXCHANGE_SWRITE,		/* All packets using SWRITE */
	RIO_EXCHANGE_NWRITE_R,		/* Last packet NWRITE_R, others NWRITE */
	RIO_EXCHANGE_SWRITE_R,		/* Last packet NWRITE_R, others SWRITE */
	RIO_EXCHANGE_NWRITE_R_ALL,	/* All packets using NWRITE_R */
};

struct rio_transfer_io {
	uint32_t rioid;			/* Target destID */
	uint64_t rio_addr;		/* Address in target's RIO mem space */
	enum rio_exchange method;	/* Data exchange method */
	void __user *loc_addr;
	uint64_t handle;
	uint64_t offset;		/* Offset in buffer */
	uint64_t length;		/* Length in bytes */
	uint32_t completion_code;	/* Completion code for this transfer */
};

struct rio_transaction {
	uint32_t transfer_mode;		/* Data transfer mode */
	enum rio_transfer_sync sync;	/* Synchronization method */
	enum rio_transfer_dir dir;	/* Transfer direction */
	size_t count;			/* Number of transfers */
	struct rio_transfer_io __user *block;	/* Array of <count> transfers */
};

struct rio_async_tx_wait {
	uint32_t token;		/* DMA transaction ID token */
	uint32_t timeout;	/* Wait timeout in msec, if 0 use default TO */
};

#define RIO_MAX_DEVNAME_SZ	20

struct rio_rdev_info {
	uint32_t destid;
	uint8_t hopcount;
	uint32_t comptag;
	char name[RIO_MAX_DEVNAME_SZ + 1];
};

/* Driver IOCTL codes */
#define RIO_MPORT_DRV_MAGIC	'm'

#define RIO_MPORT_MAINT_HDID_SET	\
	_IOW(RIO_MPORT_DRV_MAGIC, 1, uint16_t)
#define RIO_MPORT_MAINT_COMPTAG_SET	\
	_IOW(RIO_MPORT_DRV_MAGIC, 2, uint32_t)
#define RIO_MPORT_MAINT_PORT_IDX_GET	\
	_IOR(RIO_MPORT_DRV_MAGIC, 3, uint32_t)
#define RIO_MPORT_GET_PROPERTIES	\
	_IOR(RIO_MPORT_DRV_MAGIC, 4, struct rio_mport_properties)
#define RIO_MPORT_MAINT_READ_LOCAL	\
	_IOR(RIO_MPORT_DRV_MAGIC, 5, struct rio_mport_maint_io)
#define RIO_MPORT_MAINT_WRITE_LOCAL	\
	_IOW(RIO_MPORT_DRV_MAGIC, 6, struct rio_mport_maint_io)
#define RIO_MPORT_MAINT_READ_REMOTE	\
	_IOR(RIO_MPORT_DRV_MAGIC, 7, struct rio_mport_maint_io)
#define RIO_MPORT_MAINT_WRITE_REMOTE	\
	_IOW(RIO_MPORT_DRV_MAGIC, 8, struct rio_mport_maint_io)
#define RIO_ENABLE_DOORBELL_RANGE	\
	_IOW(RIO_MPORT_DRV_MAGIC, 9, struct rio_doorbell_filter)
#define RIO_DISABLE_DOORBELL_RANGE	\
	_IOW(RIO_MPORT_DRV_MAGIC, 10, struct rio_doorbell_filter)
#define RIO_ENABLE_PORTWRITE_RANGE	\
	_IOW(RIO_MPORT_DRV_MAGIC, 11, struct rio_pw_filter)
#define RIO_DISABLE_PORTWRITE_RANGE	\
	_IOW(RIO_MPORT_DRV_MAGIC, 12, struct rio_pw_filter)
#define RIO_SET_EVENT_MASK		\
	_IOW(RIO_MPORT_DRV_MAGIC, 13, unsigned int)
#define RIO_GET_EVENT_MASK		\
	_IOR(RIO_MPORT_DRV_MAGIC, 14, unsigned int)
#define RIO_MAP_OUTBOUND		\
	_IOWR(RIO_MPORT_DRV_MAGIC, 15, struct rio_mmap)
#define RIO_UNMAP_OUTBOUND		\
	_IOW(RIO_MPORT_DRV_MAGIC, 16, struct rio_mmap)
#define RIO_MAP_INBOUND			\
	_IOWR(RIO_MPORT_DRV_MAGIC, 17, struct rio_mmap)
#define RIO_UNMAP_INBOUND		\
	_IOW(RIO_MPORT_DRV_MAGIC, 18, uint64_t)
#define RIO_ALLOC_DMA			\
	_IOWR(RIO_MPORT_DRV_MAGIC, 19, struct rio_dma_mem)
#define RIO_FREE_DMA			\
	_IOW(RIO_MPORT_DRV_MAGIC, 20, uint64_t)
#define RIO_TRANSFER			\
	_IOWR(RIO_MPORT_DRV_MAGIC, 21, struct rio_transaction)
#define RIO_WAIT_FOR_ASYNC		\
	_IOW(RIO_MPORT_DRV_MAGIC, 22, struct rio_async_tx_wait)
#define RIO_DEV_ADD			\
	_IOW(RIO_MPORT_DRV_MAGIC, 23, struct rio_rdev_info)
#define RIO_DEV_DEL			\
	_IOW(RIO_MPORT_DRV_MAGIC, 24, struct rio_rdev_info)

#endif /* _RIO_MPORT_CDEV_H_ */
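From user space the mport character device is driven through these ioctls. A hedged sketch of querying port properties; the /dev/rio_mport0 node name is an assumption about how the driver registers its device nodes:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/rio_mport_cdev.h>

	int main(void)
	{
		struct rio_mport_properties props;
		int fd = open("/dev/rio_mport0", O_RDWR);	/* assumed node name */

		if (fd < 0)
			return 1;
		if (ioctl(fd, RIO_MPORT_GET_PROPERTIES, &props) == 0)
			printf("hdid=%u link_speed=%u flags=0x%x\n",
			       props.hdid, props.link_speed, props.flags);
		return 0;
	}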
@@ -238,6 +238,8 @@
#define RIO_PORT_N_ACK_INBOUND		0x3f000000
#define RIO_PORT_N_ACK_OUTSTAND		0x00003f00
#define RIO_PORT_N_ACK_OUTBOUND		0x0000003f
#define RIO_PORT_N_CTL2_CSR(x)		(0x0054 + x*0x20)
#define RIO_PORT_N_CTL2_SEL_BAUD	0xf0000000
#define RIO_PORT_N_ERR_STS_CSR(x)	(0x0058 + x*0x20)
#define RIO_PORT_N_ERR_STS_PW_OUT_ES	0x00010000 /* Output Error-stopped */
#define RIO_PORT_N_ERR_STS_PW_INP_ES	0x00000100 /* Input Error-stopped */
@@ -249,6 +251,7 @@
#define RIO_PORT_N_CTL_PWIDTH		0xc0000000
#define RIO_PORT_N_CTL_PWIDTH_1		0x00000000
#define RIO_PORT_N_CTL_PWIDTH_4		0x40000000
#define RIO_PORT_N_CTL_IPW		0x38000000 /* Initialized Port Width */
#define RIO_PORT_N_CTL_P_TYP_SER	0x00000001
#define RIO_PORT_N_CTL_LOCKOUT		0x00000002
#define RIO_PORT_N_CTL_EN_RX_SER	0x00200000

@@ -51,6 +51,7 @@ struct sched_param {
#include <linux/resource.h>
#include <linux/timer.h>
#include <linux/hrtimer.h>
#include <linux/kcov.h>
#include <linux/task_io_accounting.h>
#include <linux/latencytop.h>
#include <linux/cred.h>
@@ -1818,6 +1819,16 @@ struct task_struct {
	/* bitmask and counter of trace recursion */
	unsigned long trace_recursion;
#endif /* CONFIG_TRACING */
#ifdef CONFIG_KCOV
	/* Coverage collection mode enabled for this task (0 if disabled). */
	enum kcov_mode kcov_mode;
	/* Size of the kcov_area. */
	unsigned kcov_size;
	/* Buffer for coverage collection. */
	void *kcov_area;
	/* kcov descriptor wired with this task or NULL. */
	struct kcov *kcov;
#endif
#ifdef CONFIG_MEMCG
	struct mem_cgroup *memcg_in_oom;
	gfp_t memcg_oom_gfp_mask;

@@ -354,6 +354,7 @@ header-y += reiserfs_fs.h
header-y += reiserfs_xattr.h
header-y += resource.h
header-y += rfkill.h
header-y += rio_mport_cdev.h
header-y += romfs_fs.h
header-y += rose.h
header-y += route.h

include/uapi/linux/kcov.h (new file, 10 lines)
@@ -0,0 +1,10 @@
#ifndef _LINUX_KCOV_IOCTLS_H
#define _LINUX_KCOV_IOCTLS_H

#include <linux/types.h>

#define KCOV_INIT_TRACE			_IOR('c', 1, unsigned long)
#define KCOV_ENABLE			_IO('c', 100)
#define KCOV_DISABLE			_IO('c', 101)

#endif /* _LINUX_KCOV_IOCTLS_H */
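Taken together, the intended user-space sequence is: open the debugfs node, size the buffer with KCOV_INIT_TRACE, mmap it, then bracket the code under test with KCOV_ENABLE/KCOV_DISABLE; word 0 of the shared buffer is the PC count and entries follow from index 1, as established by __sanitizer_cov_trace_pc() below. A condensed sketch with error handling elided:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <unistd.h>
	#include <linux/kcov.h>

	#define COVER_SIZE (64 << 10)		/* buffer size, in unsigned longs */

	int main(void)
	{
		int fd = open("/sys/kernel/debug/kcov", O_RDWR);
		unsigned long *cover, n, i;

		ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
		cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
			     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		ioctl(fd, KCOV_ENABLE, 0);
		cover[0] = 0;			/* reset the PC counter */
		read(-1, NULL, 0);		/* the syscall under test */
		n = cover[0];
		for (i = 0; i < n; i++)
			printf("0x%lx\n", cover[i + 1]);
		ioctl(fd, KCOV_DISABLE, 0);
		return 0;
	}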
ipc/sem.c (13 lines changed)
@@ -92,7 +92,14 @@
/* One semaphore structure for each semaphore in the system. */
struct sem {
	int	semval;		/* current value */
	int	sempid;		/* pid of last operation */
	/*
	 * PID of the process that last modified the semaphore. For
	 * Linux, specifically these are:
	 *  - semop
	 *  - semctl, via SETVAL and SETALL.
	 *  - at task exit when performing undo adjustments (see exit_sem).
	 */
	int	sempid;
	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
	struct list_head pending_alter; /* pending single-sop operations */
					/* that alter the semaphore */
@@ -1444,8 +1451,10 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
		goto out_unlock;
	}

	for (i = 0; i < nsems; i++)
	for (i = 0; i < nsems; i++) {
		sma->sem_base[i].semval = sem_io[i];
		sma->sem_base[i].sempid = task_tgid_vnr(current);
	}

	ipc_assert_locked_object(&sma->sem_perm);
	list_for_each_entry(un, &sma->list_id, list_id) {
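The user-visible effect is that GETPID now also reflects a SETALL (and SETVAL) caller, not only the last semop() caller. A hedged user-space fragment, assuming a valid semid (union semun is caller-defined per semctl(2)):

	union semun {
		int val;
		struct semid_ds *buf;
		unsigned short *array;
	};

	unsigned short vals[1] = { 1 };
	union semun arg = { .array = vals };
	pid_t last;

	semctl(semid, 0, SETALL, arg);
	/* With this fix, GETPID reports this process's pid even though
	 * it never called semop(). */
	last = semctl(semid, 0, GETPID);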
@@ -18,6 +18,17 @@ ifdef CONFIG_FUNCTION_TRACER
CFLAGS_REMOVE_irq_work.o = $(CC_FLAGS_FTRACE)
endif

# Prevents flicker of uninteresting __do_softirq()/__local_bh_disable_ip()
# in coverage traces.
KCOV_INSTRUMENT_softirq.o := n
# These are called from save_stack_trace() on slub debug path,
# and produce insane amounts of uninteresting coverage.
KCOV_INSTRUMENT_module.o := n
KCOV_INSTRUMENT_extable.o := n
# Don't self-instrument.
KCOV_INSTRUMENT_kcov.o := n
KASAN_SANITIZE_kcov.o := n

# cond_syscall is currently not LTO compatible
CFLAGS_sys_ni.o = $(DISABLE_LTO)

@@ -68,6 +79,7 @@ obj-$(CONFIG_AUDITSYSCALL) += auditsc.o
obj-$(CONFIG_AUDIT_WATCH) += audit_watch.o audit_fsnotify.o
obj-$(CONFIG_AUDIT_TREE) += audit_tree.o
obj-$(CONFIG_GCOV_KERNEL) += gcov/
obj-$(CONFIG_KCOV) += kcov.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_KGDB) += debug/
obj-$(CONFIG_DETECT_HUNG_TASK) += hung_task.o

@@ -2412,8 +2412,8 @@ void __audit_seccomp(unsigned long syscall, long signr, int code)
		return;
	audit_log_task(ab);
	audit_log_format(ab, " sig=%ld arch=%x syscall=%ld compat=%d ip=0x%lx code=0x%x",
			 signr, syscall_get_arch(), syscall, is_compat_task(),
			 KSTK_EIP(current), code);
			 signr, syscall_get_arch(), syscall,
			 in_compat_syscall(), KSTK_EIP(current), code);
	audit_log_end(ab);
}

@@ -53,6 +53,7 @@
#include <linux/oom.h>
#include <linux/writeback.h>
#include <linux/shm.h>
#include <linux/kcov.h>

#include <asm/uaccess.h>
#include <asm/unistd.h>
@@ -655,6 +656,7 @@ void do_exit(long code)
	TASKS_RCU(int tasks_rcu_i);

	profile_task_exit(tsk);
	kcov_task_exit(tsk);

	WARN_ON(blk_needs_flush_plug(tsk));

@@ -75,6 +75,7 @@
#include <linux/aio.h>
#include <linux/compiler.h>
#include <linux/sysctl.h>
#include <linux/kcov.h>

#include <asm/pgtable.h>
#include <asm/pgalloc.h>
@@ -392,6 +393,8 @@ static struct task_struct *dup_task_struct(struct task_struct *orig)

	account_kernel_stack(ti, 1);

	kcov_task_init(tsk);

	return tsk;

free_ti:

@@ -185,10 +185,12 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
	rcu_read_unlock();
}

static unsigned long timeout_jiffies(unsigned long timeout)
static long hung_timeout_jiffies(unsigned long last_checked,
				 unsigned long timeout)
{
	/* timeout of 0 will disable the watchdog */
	return timeout ? timeout * HZ : MAX_SCHEDULE_TIMEOUT;
	return timeout ? last_checked - jiffies + timeout * HZ :
		MAX_SCHEDULE_TIMEOUT;
}

/*
@@ -224,18 +226,21 @@ EXPORT_SYMBOL_GPL(reset_hung_task_detector);
 */
static int watchdog(void *dummy)
{
	unsigned long hung_last_checked = jiffies;

	set_user_nice(current, 0);

	for ( ; ; ) {
		unsigned long timeout = sysctl_hung_task_timeout_secs;
		long t = hung_timeout_jiffies(hung_last_checked, timeout);

		while (schedule_timeout_interruptible(timeout_jiffies(timeout)))
			timeout = sysctl_hung_task_timeout_secs;

		if (atomic_xchg(&reset_hung_task, 0))
		if (t <= 0) {
			if (!atomic_xchg(&reset_hung_task, 0))
				check_hung_uninterruptible_tasks(timeout);
			hung_last_checked = jiffies;
			continue;

		check_hung_uninterruptible_tasks(timeout);
		}
		schedule_timeout_interruptible(t);
	}

	return 0;
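In effect the watchdog now sleeps only for the remainder of the check interval instead of restarting it on every wakeup. For example, with hung_task_timeout_secs at 120, a stray signal 30 seconds into the interval now leads to a further ~90-second sleep followed by a check, where the old loop would have begun a fresh 120-second sleep; with frequent enough wakeups the old code could postpone the check indefinitely.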
@@ -1322,8 +1322,8 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)

		if (nmsk != omsk)
			/* hope the handler works with current trigger mode */
			pr_warning("irq %d uses trigger mode %u; requested %u\n",
				   irq, nmsk, omsk);
			pr_warn("irq %d uses trigger mode %u; requested %u\n",
				irq, nmsk, omsk);
	}

	*old_ptr = new;

kernel/kcov.c (new file, 273 lines)
@@ -0,0 +1,273 @@
#define pr_fmt(fmt) "kcov: " fmt

#include <linux/compiler.h>
#include <linux/types.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include <linux/debugfs.h>
#include <linux/uaccess.h>
#include <linux/kcov.h>

/*
 * kcov descriptor (one per opened debugfs file).
 * State transitions of the descriptor:
 *  - initial state after open()
 *  - then there must be a single ioctl(KCOV_INIT_TRACE) call
 *  - then, mmap() call (several calls are allowed but not useful)
 *  - then, repeated enable/disable for a task (only one task at a time allowed)
 */
struct kcov {
	/*
	 * Reference counter. We keep one for:
	 *  - opened file descriptor
	 *  - task with enabled coverage (we can't unwire it from another task)
	 */
	atomic_t		refcount;
	/* The lock protects mode, size, area and t. */
	spinlock_t		lock;
	enum kcov_mode		mode;
	/* Size of arena (in long's for KCOV_MODE_TRACE). */
	unsigned		size;
	/* Coverage buffer shared with user space. */
	void			*area;
	/* Task for which we collect coverage, or NULL. */
	struct task_struct	*t;
};

/*
 * Entry point from instrumented code.
 * This is called once per basic-block/edge.
 */
void __sanitizer_cov_trace_pc(void)
{
	struct task_struct *t;
	enum kcov_mode mode;

	t = current;
	/*
	 * We are interested in code coverage as a function of syscall inputs,
	 * so we ignore code executed in interrupts.
	 */
	if (!t || in_interrupt())
		return;
	mode = READ_ONCE(t->kcov_mode);
	if (mode == KCOV_MODE_TRACE) {
		unsigned long *area;
		unsigned long pos;

		/*
		 * There is some code that runs in interrupts but for which
		 * in_interrupt() returns false (e.g. preempt_schedule_irq()).
		 * READ_ONCE()/barrier() effectively provides load-acquire wrt
		 * interrupts, there are paired barrier()/WRITE_ONCE() in
		 * kcov_ioctl_locked().
		 */
		barrier();
		area = t->kcov_area;
		/* The first word is number of subsequent PCs. */
		pos = READ_ONCE(area[0]) + 1;
		if (likely(pos < t->kcov_size)) {
			area[pos] = _RET_IP_;
			WRITE_ONCE(area[0], pos);
		}
	}
}
EXPORT_SYMBOL(__sanitizer_cov_trace_pc);

static void kcov_get(struct kcov *kcov)
{
	atomic_inc(&kcov->refcount);
}

static void kcov_put(struct kcov *kcov)
{
	if (atomic_dec_and_test(&kcov->refcount)) {
		vfree(kcov->area);
		kfree(kcov);
	}
}

void kcov_task_init(struct task_struct *t)
{
	t->kcov_mode = KCOV_MODE_DISABLED;
	t->kcov_size = 0;
	t->kcov_area = NULL;
	t->kcov = NULL;
}

void kcov_task_exit(struct task_struct *t)
{
	struct kcov *kcov;

	kcov = t->kcov;
	if (kcov == NULL)
		return;
	spin_lock(&kcov->lock);
	if (WARN_ON(kcov->t != t)) {
		spin_unlock(&kcov->lock);
		return;
	}
	/* Just to not leave dangling references behind. */
	kcov_task_init(t);
	kcov->t = NULL;
	spin_unlock(&kcov->lock);
	kcov_put(kcov);
}

static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
{
	int res = 0;
	void *area;
	struct kcov *kcov = vma->vm_file->private_data;
	unsigned long size, off;
	struct page *page;

	area = vmalloc_user(vma->vm_end - vma->vm_start);
	if (!area)
		return -ENOMEM;

	spin_lock(&kcov->lock);
	size = kcov->size * sizeof(unsigned long);
	if (kcov->mode == KCOV_MODE_DISABLED || vma->vm_pgoff != 0 ||
	    vma->vm_end - vma->vm_start != size) {
		res = -EINVAL;
		goto exit;
	}
	if (!kcov->area) {
		kcov->area = area;
		vma->vm_flags |= VM_DONTEXPAND;
		spin_unlock(&kcov->lock);
		for (off = 0; off < size; off += PAGE_SIZE) {
			page = vmalloc_to_page(kcov->area + off);
			if (vm_insert_page(vma, vma->vm_start + off, page))
				WARN_ONCE(1, "vm_insert_page() failed");
		}
		return 0;
	}
exit:
	spin_unlock(&kcov->lock);
	vfree(area);
	return res;
}

static int kcov_open(struct inode *inode, struct file *filep)
{
	struct kcov *kcov;

	kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
	if (!kcov)
		return -ENOMEM;
	atomic_set(&kcov->refcount, 1);
	spin_lock_init(&kcov->lock);
	filep->private_data = kcov;
	return nonseekable_open(inode, filep);
}

static int kcov_close(struct inode *inode, struct file *filep)
{
	kcov_put(filep->private_data);
	return 0;
}

static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
			     unsigned long arg)
{
	struct task_struct *t;
	unsigned long size, unused;

	switch (cmd) {
	case KCOV_INIT_TRACE:
		/*
		 * Enable kcov in trace mode and set up buffer size.
		 * Must happen before anything else.
		 */
		if (kcov->mode != KCOV_MODE_DISABLED)
			return -EBUSY;
		/*
		 * Size must be at least 2 to hold current position and one PC.
		 * Later we allocate size * sizeof(unsigned long) memory,
		 * that must not overflow.
		 */
		size = arg;
		if (size < 2 || size > INT_MAX / sizeof(unsigned long))
			return -EINVAL;
		kcov->size = size;
		kcov->mode = KCOV_MODE_TRACE;
		return 0;
	case KCOV_ENABLE:
		/*
		 * Enable coverage for the current task.
		 * At this point the user must have enabled trace mode,
		 * and mmapped the file. Coverage collection is disabled only
		 * at task exit or voluntarily by KCOV_DISABLE. After that it
		 * can be enabled for another task.
		 */
		unused = arg;
		if (unused != 0 || kcov->mode == KCOV_MODE_DISABLED ||
		    kcov->area == NULL)
			return -EINVAL;
		if (kcov->t != NULL)
			return -EBUSY;
		t = current;
		/* Cache in task struct for performance. */
		t->kcov_size = kcov->size;
		t->kcov_area = kcov->area;
		/* See comment in __sanitizer_cov_trace_pc(). */
		barrier();
		WRITE_ONCE(t->kcov_mode, kcov->mode);
		t->kcov = kcov;
		kcov->t = t;
		/* This is put either in kcov_task_exit() or in KCOV_DISABLE. */
		kcov_get(kcov);
		return 0;
	case KCOV_DISABLE:
		/* Disable coverage for the current task. */
		unused = arg;
		if (unused != 0 || current->kcov != kcov)
			return -EINVAL;
		t = current;
		if (WARN_ON(kcov->t != t))
			return -EINVAL;
		kcov_task_init(t);
		kcov->t = NULL;
		kcov_put(kcov);
		return 0;
	default:
		return -ENOTTY;
	}
}

static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
{
	struct kcov *kcov;
	int res;

	kcov = filep->private_data;
	spin_lock(&kcov->lock);
	res = kcov_ioctl_locked(kcov, cmd, arg);
	spin_unlock(&kcov->lock);
	return res;
}

static const struct file_operations kcov_fops = {
	.open		= kcov_open,
	.unlocked_ioctl	= kcov_ioctl,
	.mmap		= kcov_mmap,
	.release	= kcov_close,
};

static int __init kcov_init(void)
{
	if (!debugfs_create_file("kcov", 0600, NULL, NULL, &kcov_fops)) {
		pr_err("failed to create kcov in debugfs\n");
		return -ENOMEM;
	}
	return 0;
}

device_initcall(kcov_init);
@@ -1,3 +1,6 @@
# Any varying coverage in these files is non-deterministic
# and is generally not a function of system call inputs.
KCOV_INSTRUMENT		:= n

obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o

@@ -41,11 +41,13 @@ static void *try_ram_remap(resource_size_t offset, size_t size)
 * memremap() - remap an iomem_resource as cacheable memory
 * @offset: iomem resource start address
 * @size: size of remap
 * @flags: either MEMREMAP_WB or MEMREMAP_WT
 * @flags: any of MEMREMAP_WB, MEMREMAP_WT and MEMREMAP_WC
 *
 * memremap() is "ioremap" for cases where it is known that the resource
 * being mapped does not have i/o side effects and the __iomem
 * annotation is not applicable.
 * annotation is not applicable. In the case of multiple flags, the different
 * mapping types will be attempted in the order listed below until one of
 * them succeeds.
 *
 * MEMREMAP_WB - matches the default mapping for System RAM on
 * the architecture. This is usually a read-allocate write-back cache.
@@ -57,6 +59,10 @@ static void *try_ram_remap(resource_size_t offset, size_t size)
 * cache or are written through to memory and never exist in a
 * cache-dirty state with respect to program visibility. Attempts to
 * map System RAM with this mapping type will fail.
 *
 * MEMREMAP_WC - establish a writecombine mapping, whereby writes may
 * be coalesced together (e.g. in the CPU's write buffers), but is otherwise
 * uncached. Attempts to map System RAM with this mapping type will fail.
 */
void *memremap(resource_size_t offset, size_t size, unsigned long flags)
{
@@ -64,6 +70,9 @@ void *memremap(resource_size_t offset, size_t size, unsigned long flags)
			IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
	void *addr = NULL;

	if (!flags)
		return NULL;

	if (is_ram == REGION_MIXED) {
		WARN_ONCE(1, "memremap attempted on mixed range %pa size: %#lx\n",
				&offset, (unsigned long) size);
@@ -72,7 +81,6 @@ void *memremap(resource_size_t offset, size_t size, unsigned long flags)

	/* Try all mapping types requested until one returns non-NULL */
	if (flags & MEMREMAP_WB) {
		flags &= ~MEMREMAP_WB;
		/*
		 * MEMREMAP_WB is special in that it can be satisfied
		 * from the direct map. Some archs depend on the
@@ -86,21 +94,22 @@ void *memremap(resource_size_t offset, size_t size, unsigned long flags)
	}

	/*
	 * If we don't have a mapping yet and more request flags are
	 * pending then we will be attempting to establish a new virtual
	 * If we don't have a mapping yet and other request flags are
	 * present then we will be attempting to establish a new virtual
	 * address mapping. Enforce that this mapping is not aliasing
	 * System RAM.
	 */
	if (!addr && is_ram == REGION_INTERSECTS && flags) {
	if (!addr && is_ram == REGION_INTERSECTS && flags != MEMREMAP_WB) {
		WARN_ONCE(1, "memremap attempted on ram %pa size: %#lx\n",
				&offset, (unsigned long) size);
		return NULL;
	}

	if (!addr && (flags & MEMREMAP_WT)) {
		flags &= ~MEMREMAP_WT;
	if (!addr && (flags & MEMREMAP_WT))
		addr = ioremap_wt(offset, size);
	}

	if (!addr && (flags & MEMREMAP_WC))
		addr = ioremap_wc(offset, size);

	return addr;
}
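A caller that needs write-combining (as the dma-coherent changes in this series do) passes MEMREMAP_WC alone; per the fallback ordering above, combining it with MEMREMAP_WB would try the cacheable mapping first. A minimal sketch, with the resource variable assumed:

	/* Map a device buffer write-combined; returns NULL if the range
	 * intersects System RAM. 'res' is a hypothetical struct resource. */
	void *wc = memremap(res->start, resource_size(res), MEMREMAP_WC);
	if (!wc)
		return -ENOMEM;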
@@ -73,6 +73,26 @@ void __weak nmi_panic_self_stop(struct pt_regs *regs)

atomic_t panic_cpu = ATOMIC_INIT(PANIC_CPU_INVALID);

/*
 * A variant of panic() called from NMI context. We return if we've already
 * panicked on this CPU. If another CPU already panicked, loop in
 * nmi_panic_self_stop() which can provide architecture dependent code such
 * as saving register state for crash dump.
 */
void nmi_panic(struct pt_regs *regs, const char *msg)
{
	int old_cpu, cpu;

	cpu = raw_smp_processor_id();
	old_cpu = atomic_cmpxchg(&panic_cpu, PANIC_CPU_INVALID, cpu);

	if (old_cpu == PANIC_CPU_INVALID)
		panic("%s", msg);
	else if (old_cpu != cpu)
		nmi_panic_self_stop(regs);
}
EXPORT_SYMBOL(nmi_panic);

/**
 * panic - halt the system
 * @fmt: The text string to print
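Out of line, the helper is used the same way the old macro was, except the message must now be a plain string (hpwdt's NMI pre-timeout handler is the in-tree motivation in this series). A hedged sketch with a hypothetical handler:

	static int my_nmi_handler(unsigned int type, struct pt_regs *regs)
	{
		/* Does not return if this CPU wins the panic race; returns
		 * quietly if this CPU has already panicked. */
		nmi_panic(regs, "watchdog pre-timeout in NMI context");
		return NMI_HANDLED;
	}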
Some files were not shown because too many files have changed in this diff.