2660 Commits

Author SHA1 Message Date
46fa516869 dmaengine: jz4780: Fix up dmaengine API function prototypes
Several function prototypes did not match the dmaengine API they were
implementing, resulting in build warnings. Correct these.

Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-18 22:28:49 +05:30
ac9bd0ef5d dmaengine: sirf: clear pending DMA interrupt when DMA terminates
If a DMA interrupt comes in and is latched by the IRQ controller during the
execution of dma_terminate_all(), the dma_irq routine will run after the DMA
has terminated and cause a kernel panic.
Clear pending DMA interrupts in dma_terminate_all() to avoid this spurious
interrupt.

Signed-off-by: Yanchang Li <Yanchang.Li@csr.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-18 22:13:59 +05:30
e5f4ae84be dmaengine: add driver for lpc18xx dmamux
Add support for DMA on NXP LPC18xx/43xx platforms, which have
a multiplexer in front of the PL080 DMA request lines.

The mux is a single register in the LPC18xx/43xx CREG block
and can multiplex up to 4 request lines to each of the 16
lines on the PL080.
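
A rough sketch of this mux programming model (register offset, macro names, and
the regmap handle are illustrative assumptions, not the driver's actual symbols):
each of the 16 PL080 request lines gets a 2-bit select field in the single CREG
register, picking one of its 4 possible peripheral requests.

  #include <linux/regmap.h>

  #define DMAMUX_BITS_PER_LINE   2
  #define DMAMUX_MASK(line)      (0x3 << ((line) * DMAMUX_BITS_PER_LINE))
  #define DMAMUX_SEL(line, sel)  ((sel) << ((line) * DMAMUX_BITS_PER_LINE))

  /* Route peripheral request 'sel' (0..3) to PL080 request line 'line' (0..15). */
  static int dmamux_route(struct regmap *creg, unsigned int reg,
                          unsigned int line, unsigned int sel)
  {
          return regmap_update_bits(creg, reg, DMAMUX_MASK(line),
                                    DMAMUX_SEL(line, sel));
  }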

Signed-off-by: Joachim Eastwood <manabian@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-18 22:12:14 +05:30
aa4734da66 dmaengine: pl08x: support dt channel assignment
Add support for assigning DMA channels from a device tree.

[je: remove channel sub-node parsing, dynamic channel creation on xlate]

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Joachim Eastwood <manabian@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-18 22:12:14 +05:30
4a736d156d dmaengine: pxa_dma: fix debug information
This fixes the following error:
drivers/dma/pxa_dma.c: In function ‘dbg_show_requester_chan’:
drivers/dma/pxa_dma.c:192:2: error: void value not ignored as it ought to be
  pos += seq_printf(s, "DMA channel %d requester :\n", phy->idx);
  ^
drivers/dma/pxa_dma.c:197:8: error: void value not ignored as it ought to be
        !!(drcmr & DRCMR_MAPVLD));
        ^
scripts/Makefile.build:258: recipe for target 'drivers/dma/pxa_dma.o' failed
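
The underlying issue is that seq_printf() now returns void, so its result can no
longer be accumulated into a byte counter. A minimal sketch of the resulting
pattern (struct and field names are taken from the error output above and are
assumptions, not the exact driver code):

  static int dbg_show_requester_chan(struct seq_file *s, void *p)
  {
          struct pxad_phy *phy = s->private;

          /* Print directly; seq_file tracks overflow itself, no 'pos +=' needed. */
          seq_printf(s, "DMA channel %d requester :\n", phy->idx);
          /* ... the per-requester lines are emitted the same way ... */
          return 0;
  }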

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-18 22:10:09 +05:30
05aa1a77dc dmaengine: fix balance of privatecnt inc/dec operations
This patch increments the privatecnt value and sets DMA_PRIVATE in the device
caps in dma_request_slave_channel(). This is needed to keep the privatecnt
increment/decrement balance.

As dma_release_channel() decrements the privatecnt counter, we need to
increment it when a channel is requested. Otherwise privatecnt drops into
negative values after a few dma_release_channel() calls.
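
A minimal sketch of the balancing idea, assuming the slave-channel request path
looks roughly like this (helper names follow the dmaengine API of that time, not
necessarily verbatim):

  struct dma_chan *dma_request_slave_channel(struct device *dev, const char *name)
  {
          struct dma_chan *ch = dma_request_slave_channel_reason(dev, name);

          if (IS_ERR(ch))
                  return NULL;

          /* Mark the channel private and bump privatecnt so the decrement
           * in dma_release_channel() stays balanced. */
          dma_cap_set(DMA_PRIVATE, ch->device->cap_mask);
          ch->device->privatecnt++;

          return ch;
  }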

Reported-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Robert Baldyga <r.baldyga@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 22:47:43 +05:30
0e95fb9ceb dmaengine: pxa_dma: don't use config direction parameter
Don't use the direction passed in the configuration, and rely on each
transfer's direction to prepare the transfers. This will enable
future removal of direction parameter from dma_slave_config.

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 14:52:09 +05:30
09659a5978 dmaengine: ioatdma: Clean up IOAT_COMPLETION_PENDING flag
The IOAT_COMPLETION_PENDING flag was deprecated for the v2 and v3 drivers but
was never cleaned up. Do that now. The commit that deprecated this flag was
4dec23d7 ("ioatdma: fix race between updating ioat->head and
IOAT_COMPLETION_PENDING").

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:31 +05:30
c7b0e8d7b5 dmaengine: ioatdma: fixup kernel doc errors from dma.h
./scripts/kernel-doc is reporting errors on dma.h. Clean up all reported
errors.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:31 +05:30
ef97bd0f59 dmaengine: ioatdma: remove function ptrs in ioatdma_device
Since we are a "single" device driver now, we no longer require the function
pointers in ioatdma_device. Remove them.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
3372de5813 dmaengine: ioatdma: removal of dma_v3.c and relevant ioat3 references
Move the relevant functions to their respective .c files and remove the
dma_v3.c file. Also remove various ioat3 references where appropriate.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
599d49de7f dmaengine: ioatdma: move dma prep functions to single location
Move all DMA descriptor prepping functions to the prep.c file. Fix up all
breakage caused by the move.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
c0f28ce66e dmaengine: ioatdma: move all the init routines
Move all the init routines to init.c and fix up anything broken during
the move.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
80b1973659 dmaengine: ioatdma: move all sysfs related code
Move all sysfs-related bits to the sysfs.c file and fix them up.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
885b201056 dmaengine: ioatdma: remove dma_v2.*
Clean out dma_v2 and remove ioat2 calls since we are moving everything
to just ioat.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
55f878ec47 dmaengine: ioatdma: fixup ioatdma_device namings
Rename the ioatdma_device variables to be consistently called ioat_dma
instead of device/dma, in order to avoid confusion and to be distinct from
struct device. This clearly indicates that the variable is an
ioatdma_device. It also makes the naming consistent: the dma device is
ioat_dma and all the channels are ioat_chan.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
5a976888c9 dmaengine: ioatdma: clean up local dma channel data structure
Kill the common ioatdma channel structure and rename everything that is not
a dma_chan to ioat_dma_chan. Since we don't have to worry about v1 and v2
ioatdma anymore, this makes the code much cleaner and more obvious for
maintenance.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
7f832645d0 dmaengine: ioatdma: remove ioatdma v2 registration
Remove support for ioatdma v2 devices.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
85596a1947 dmaengine: ioatdma: remove ioat1 specific code
Clean up ioat1-specific code as it is no longer supported.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:30 +05:30
d73f277b32 dmaengine: ioatdma: deprecating and removal of old ioatdma devices
Remove any ioatdma devices that are pre-3.0. This is the first step in
cleaning up the ioatdma driver and removing hardware that is no longer
supported.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:37:29 +05:30
5484526ac1 dmaengine: ioatdma: fix u16 overflow in cleanup
If the allocation order is 16, then the u16 count will overflow and wrap
to zero when assigned the value 1 << 16.

Change the type of 'total_descs' to int, so that it is large enough to
store a value equal to or greater than 1 << 16.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:35:58 +05:30
870ce49022 dmaengine: ioatdma: fix u16 overflow in reshape
If the allocation order is 16, then the u16 index will overflow and wrap
to zero instead of reaching a value equal to or greater than 1 << 16.  The
loop condition will always be true, and the loop will run until all the
memory resources are depleted.

Change the type of index 'i' to u32, so that it is large enough to store
a value equal to or greater than 1 << 16.
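
This fix and the previous one boil down to the same C arithmetic pitfall; a
small illustration (variable names mirror the commit messages, the function is
purely for demonstration):

  static void overflow_example(void)
  {
          u16 total_descs = 1 << 16;      /* wraps to 0 instead of 65536 (cleanup bug) */
          u16 i;

          for (i = 0; i < (1 << 16); i++) /* always true for a u16, never terminates   */
                  ;                       /* (reshape bug)                              */

          /* The fixes simply widen the types: 'total_descs' becomes int and
           * the loop index becomes u32. */
          (void)total_descs;
  }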

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-17 13:35:58 +05:30
3d8cc00073 dmaengine: ipu: Consolidate duplicated irq handlers
The functions ipu_irq_err and ipu_irq_fn are identical, plus/minus the
comments. Remove one.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-06 08:30:57 +05:30
425e20fd08 dmaengine: ipu: Prepare irq handlers for irq argument removal
The irq argument of most interrupt flow handlers is unused or merely used
instead of a local variable. The handlers which need the irq argument can
retrieve the irq number from the irq descriptor.

Search and update was done with coccinelle and the invaluable help of
Julia Lawall.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-06 08:30:57 +05:30
67a6eedc4d dmaengine: xdmac: Add scatter gathered memset support
The XDMAC also supports memset operations over discontiguous areas. Add the
necessary logic to support this.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-06 08:30:56 +05:30
ed9c87b331 dmaengine: zxdma: Fix force stop bug
The DMA controller does not stop when the enable bit is cleared until all
transactions are done. The bug shows up in audio playback because a ring DMA
chain never stops. Force the hardware to stop by setting the FORCE bit.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-05 11:03:23 +05:30
2092539b77 dmaengine: zxdma: Fix data width bug
Align the src and dst widths to fix a data alignment issue: a trailing
single transaction that does not fill a full burst requires identical
src/dst data widths. This also addresses the burst length limitation.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-05 11:03:23 +05:30
77a68e56aa dmaengine: Add an enum for the dmaengine alignment constraints
Most drivers need to set constraints on the buffer alignment for async tx
operations. However, even though it is documented, some drivers either use a
defined constant that does not match what the alignment variable expects
(like the DMA_BUSWIDTH_* constants) or fill in the alignment in bytes instead
of as a power of two.

Add a new enum for these alignments that matches what the framework
expects, and convert the drivers to it.
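
For reference, the enum introduced here looks along these lines (quoted from
memory, so treat it as a sketch rather than the authoritative header); the value
is the log2 of the alignment in bytes, matching what fields such as
dma_device::copy_align expect:

  enum dmaengine_alignment {
          DMAENGINE_ALIGN_1_BYTE = 0,
          DMAENGINE_ALIGN_2_BYTES = 1,
          DMAENGINE_ALIGN_4_BYTES = 2,
          DMAENGINE_ALIGN_8_BYTES = 3,
          DMAENGINE_ALIGN_16_BYTES = 4,
          DMAENGINE_ALIGN_32_BYTES = 5,
          DMAENGINE_ALIGN_64_BYTES = 6,
  };

  /* a driver then sets, e.g.: dma_dev->copy_align = DMAENGINE_ALIGN_4_BYTES; */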

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-05 10:53:52 +05:30
28eb232f21 dmaengine: ti-dma-crossbar: Fix checking return value of devm_ioremap_resource
devm_ioremap_resource returns ERR_PTR on failure.
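
In other words, the error path has to use IS_ERR()/PTR_ERR() rather than a NULL
check; the generic pattern is:

  void __iomem *iomem;

  iomem = devm_ioremap_resource(&pdev->dev, res);
  if (IS_ERR(iomem))
          return PTR_ERR(iomem);  /* never NULL on failure, so a '!iomem' check is wrong */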

Signed-off-by: Axel Lin <axel.lin@ingics.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-05 10:40:06 +05:30
2f2560e348 dmaengine: zxdma: Support cyclic dma
Support cyclic DMA for audio playback.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-05 09:01:08 +05:30
8391ecf465 dmaengine: imx-sdma: Add device to device support
This patch adds DEV_TO_DEV support to the i.MX SDMA driver for data
transfers between two peripheral FIFOs.
The per_2_per script requires two peripheral addresses and two DMA
requests, and it needs to check whether the src and dst addresses are in
the SPBA bus space or in the AIPS bus space.

Signed-off-by: Shengjiu Wang <shengjiu.wang@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-08-05 08:46:34 +05:30
3f6d9e0896 dmaengine fixes for 4.2-rc5
We had a regression due to reuse of descriptors, so we have reverted that.
   The rest are driver fixes:
      at_hdmac and at_xdmac for residue, transfer width, and channel config
      pl330 final fix for dma fails and overflow issue
      xgene resource map fix
      mv_xor big endian op fix

Merge tag 'dmaengine-fix-4.2-rc5' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine fixes from Vinod Koul:
 "We had a regression due to reuse of descriptor so we have reverted
  that.

  The rest are driver fixes:

   - at_hdmac and at_xdmac for residue, transfer width, and channel config
   - pl330 final fix for dma fails and overflow issue
   - xgene resource map fix
   - mv_xor big endian op fix"

* tag 'dmaengine-fix-4.2-rc5' of git://git.infradead.org/users/vkoul/slave-dma:
  Revert "dmaengine: virt-dma: don't always free descriptor upon completion"
  dmaengine: mv_xor: fix big endian operation in register mode
  dmaengine: xgene-dma: Fix the resource map to handle overlapping
  dmaengine: at_xdmac: fix transfer data width in at_xdmac_prep_slave_sg()
  dmaengine: at_hdmac: fix residue computation
  dmaengine: at_xdmac: fix bug about channel configuration
  dmaengine: pl330: Really fix choppy sound because of wrong residue calculation
  dmaengine: pl330: Fix overflow when reporting residue in memcpy
2015-08-01 12:47:04 -07:00
50ba22479c Merge back earlier ACPI PM material for v4.3. 2015-07-31 21:40:03 +02:00
8c8fe97b2b Revert "dmaengine: virt-dma: don't always free descriptor upon completion"
This reverts commit b9855f03d560d351e95301b9de0bc3cad3b31fe9.
The patch breaks an existing DMA use case. For example, the audio SoC
dmaengine never releases the channel, which causes virt-dma to cache so much
memory in descriptors that it exhausts system memory.

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31 20:33:43 +05:30
0ec9ebc706 dmaengine: mv_xor: fix big endian operation in register mode
Commit 6f166312c6ea2 ("dmaengine: mv_xor: add support for a38x command
in descriptor mode") introduced the support for a feature that
appeared in Armada 38x: specifying the operation to be performed on a
per-descriptor basis rather than globally per channel.

However, when doing so, it changed the function mv_chan_set_mode() to
use:

  if (IS_ENABLED(__BIG_ENDIAN))

instead of:

  #if defined(__BIG_ENDIAN)

While IS_ENABLED() is perfectly fine for CONFIG_* symbols, it is not for
other symbols such as __BIG_ENDIAN, which is provided directly by the
compiler. Consequently, the commit broke support for big-endian,
as the XOR_DESCRIPTOR_SWAP flag was not set in the XOR channel
configuration register.

The most visible effect was some nasty warnings and failures appearing
during the self-test of the XOR unit:

[    1.197368] mv_xor d0060900.xor: error on chan 0. intr cause 0x00000082
[    1.197393] mv_xor d0060900.xor: config       0x00008440
[    1.197410] mv_xor d0060900.xor: activation   0x00000000
[    1.197427] mv_xor d0060900.xor: intr cause   0x00000082
[    1.197443] mv_xor d0060900.xor: intr mask    0x000003f7
[    1.197460] mv_xor d0060900.xor: error cause  0x00000000
[    1.197477] mv_xor d0060900.xor: error addr   0x00000000
[    1.197491] ------------[ cut here ]------------
[    1.197513] WARNING: CPU: 0 PID: 1 at ../drivers/dma/mv_xor.c:664 mv_xor_interrupt_handler+0x14c/0x170()

See also:

  http://storage.kernelci.org/next/next-20150617/arm-mvebu_v7_defconfig+CONFIG_CPU_BIG_ENDIAN=y/lab-khilman/boot-armada-xp-openblocks-ax3-4.txt
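
A hedged sketch of what the corrected check has to look like in
mv_chan_set_mode() (surrounding code abridged): IS_ENABLED() only understands
Kconfig CONFIG_* macros defined to 1, so the compiler-provided __BIG_ENDIAN must
be tested with the preprocessor.

  #if defined(__BIG_ENDIAN)
          config |= XOR_DESCRIPTOR_SWAP;  /* descriptors must be byte-swapped */
  #else
          config &= ~XOR_DESCRIPTOR_SWAP;
  #endif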

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Fixes: 6f166312c6ea2 ("dmaengine: mv_xor: add support for a38x command in descriptor mode")
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31 20:33:43 +05:30
cda8e93719 dmaengine: xgene-dma: Fix the resource map to handle overlapping
There is an overlap in the dma ring cmd csr region due to sharing of the
ethernet ring cmd csr region. This patch fixes the resource overlap by
mapping the entire dma ring cmd csr region.

Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31 20:33:43 +05:30
1c8a38b126 dmaengine: at_xdmac: fix transfer data width in at_xdmac_prep_slave_sg()
This patch adds the missing update of the transfer data width in
at_xdmac_prep_slave_sg().

Indeed, for each item in the scatter-gather list, we check whether the
transfer length is aligned with the data width provided by
dmaengine_slave_config(). If so, we directly use this data width for the
current part of the transfer we are preparing. Otherwise, the data width
is reduced to 8 bits (1 byte). Of course, the actual number of register
accesses must also be updated to match the new data width.

So one chunk was missing in the original patch (see Fixes tag below): the
number of register accesses was correctly set to (len >> fixed_dwidth) in
mbr_ubc but the real data width was not updated in mbr_cfg. Since mbr_cfg
may change for each part of the scatter-gather transfer this also explains
why the original patch used the Descriptor View 2 instead of the
Descriptor View 1.

Let's take the example of a DMA transfer to write 8bit data into an Atmel
USART with FIFOs. When FIFOs are enabled in the USART, its Transmit
Holding Register (THR) works in multidata mode, that is to say that up to
4 8bit data can be written into the THR in a single 32bit access and it is
still possible to write only one data with a 8bit access. To take
advantage of this new feature, the DMA driver was modified to allow
multiple dwidths when doing slave transfers.
For instance, when the total length is 22 bytes, the USART driver splits
the transfer into 2 parts:

First part: 20 bytes transferred through 5 32bit writes into THR
Second part: 2 bytes transferred through 2 8bit writes into THR

For the second part, the data width was first set to 4_BYTES by the USART
driver thanks to dmaengine_slave_config() then at_xdmac_prep_slave_sg()
reduces this data width to 1_BYTE because the 2 byte length is not aligned
with the original 4_BYTES data width. Since the data width is modified,
the actual number of writes into THR must be set accordingly.
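
A hedged sketch of the idea (identifiers are illustrative, not the exact driver
symbols): the microblock length and the per-descriptor configuration must agree
on the data width that is actually used.

  unsigned int dwidth = cfg_dwidth;     /* width from dmaengine_slave_config()  */

  if (!IS_ALIGNED(len, 1 << dwidth))
          dwidth = 0;                   /* fall back to 8-bit (1-byte) accesses */

  mbr_ubc = len >> dwidth;              /* number of register accesses          */
  mbr_cfg = (chan_cfg & ~DWIDTH_MASK) | DWIDTH(dwidth);  /* the chunk that was missing */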

Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Fixes: 6d3a7d9e3ada ("dmaengine: at_xdmac: allow muliple dwidths when doing slave transfers")
Cc: stable@vger.kernel.org #4.0 and later
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31 20:33:42 +05:30
93dce3a643 dmaengine: at_hdmac: fix residue computation
As claimed by the programmer datasheet and confirmed by the IP designer,
the Block Transfer Size (BTSIZE) bitfield of the Channel x Control A
Register (CTRLAx) always refers to a number of Source Width (SRC_WIDTH)
transfers.

Both the SRC_WIDTH and BTSIZE bitfields can be extracted from the CTRLAx
register to compute the DMA residue. So the 'tx_width' field is useless
and can be removed from the struct at_desc.

Before this patch, atc_prep_slave_sg() was not consistent: BTSIZE was
correctly initialized according to the SRC_WIDTH but 'tx_width' was always
set to reg_width, which was incorrect for MEM_TO_DEV transfers. It led to
bad DMA residue when 'tx_width' != SRC_WIDTH.

Also the 'tx_width' field was mostly set only in the first and last
descriptors. Depending on the kind of DMA transfer, this field remained
uninitialized for intermediate descriptors. The accurate DMA residue was
computed only when the currently processed descriptor was the first or the
last of the chain. This algorithm was a little bit odd. An accurate DMA
residue can always be computed using the SRC_WIDTH and BTSIZE bitfields
in the CTRLAx register.

Finally, the test to check whether the currently processed descriptor is
the last of the chain was wrong: for cyclic transfer, last_desc->lli.dscr
is NOT equal to zero, since set_desc_eol() is never called, but logically
equal to first_desc->txd.phys. This bug has a side effect on the
drivers/tty/serial/atmel_serial.c driver, which uses cyclic DMA transfer
to receive data. Since the DMA residue was wrong each time the DMA transfer
reached the second (and last) period of the transfer, no more data were
received by the USART driver until the cyclic DMA transfer looped back to
the first period.
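
A hedged sketch of the residue computation described above (the bitfield
accessors are assumptions; the point is that CTRLAx alone is sufficient):

  u32 ctrla      = channel_readl(atchan, CTRLA);            /* assumed register accessor     */
  u32 btsize     = ctrla & ATC_BTSIZE_MAX;                  /* remaining SRC_WIDTH transfers */
  u32 src_width  = (ctrla >> ATC_SRC_WIDTH_OFFSET) & 0x3;   /* width as log2(bytes)          */
  u32 bytes_left = btsize << src_width;                     /* residue in bytes              */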

Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Acked-by: Torsten Fleischer <torfl6749@gmail.com>
Tested-by: Jirí Prchal <jiri.prchal@aksignal.cz>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31 20:33:42 +05:30
20cadcb4df dmaengine: at_xdmac: fix bug about channel configuration
When using descriptor view 2 or higher, we don't write the configuration
into the AT_XDMAC_CC register because this configuration will be fetched
from the descriptor. Unfortunately, the PROT bit is not updated with this
method, so we have to do it manually before enabling the channel.

Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-31 20:33:41 +05:30
667dfed986 dmaengine: add a driver for Intel integrated DMA 64-bit
Intel integrated DMA (iDMA) 64-bit is a specific IP that is used as part of
LPSS devices such as HSUART or SPI. The iDMA IP is attached for private
usage on each host controller independently.

While it has similarities with the Synopsys DesignWare DMA, the following
distinctions don't allow reuse of the existing driver:
- 64-bit mode with corresponding changes in the Hardware Linked List data structure
- many slight differences in the channel registers

Moreover, this driver is based on the DMA virtual channels framework, which
helps to make the driver cleaner and easier to understand.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
2015-07-28 09:56:17 +01:00
1eb995bbf7 dmaengine: ti-dma-crossbar: Add support for eDMA
The crossbar for eDMA works exactly the same way as sDMA, but sDMA
requires an offset of 1, while no offset is needed for eDMA.

Based on the patch from Misael Lopez Cruz <misael.lopez@ti.com>

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
CC: Misael Lopez Cruz <misael.lopez@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-22 19:56:07 +05:30
be559daabf dmaengine: ti-dma-crossbar: Make idr xbar instance-specific
In preparation for supporting multiple DMA crossbar instances,
make the idr xbar instance specific.

Signed-off-by: Misael Lopez Cruz <misael.lopez@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-22 19:56:07 +05:30
da89947b47 Update Viresh Kumar's email address
Switch to my kernel.org alias instead of a badly named gmail address,
which I rarely use.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-07-17 16:39:53 -07:00
9bde2823dc dmaengine: zxdma: explicitly free irq on device removal
Tasklets are not disabled and irqs are still enabled at device removal, so
free the irq explicitly when the device is removed.

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-16 18:58:50 +05:30
e3fa9841d3 dmaengine: zxdma: Support ZTE ZX296702 dma
Add ZTE ZX296702 DMA controller support. Only device tree probing is
currently supported.

Signed-off-by: Jun Nie <jun.nie@linaro.org>
Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-16 18:58:50 +05:30
4d9efdfce7 dmaengine: ipu: Use irq_desc_get_xxx() to avoid redundant lookup of irq_desc
Use irq_desc_get_xxx() to avoid a redundant lookup of the irq_desc while we
already have a pointer to the corresponding irq_desc.

This is also a preparation for the removal of the 'irq' argument from
interrupt flow handlers.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-16 18:41:18 +05:30
d7fdb35690 dmaengine: ipu: Consolidate chained IRQ handler install/remove
Chained irq handlers usually set up handler data as well. We now have
a function to set both under irq_desc->lock. Replace the two calls
with one.

Search and conversion was done with coccinelle.

Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: dmaengine@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-16 18:40:52 +05:30
03734485b7 dmaengine: hsu: remove excessive lock
All hardware accesses are done under the virtual channel lock. That's why
the channel-specific lock is excessive and can be removed safely. This has
been tested on Intel Medfield and Merrifield.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-16 18:30:46 +05:30
b6c52c6345 dmaengine: ioatdma: Ignore IOAT devices under hotplug-capable PCI host bridge
The dmaengine core assumes that async DMA devices will only be removed
when they are not used anymore, or it assumes dma_async_device_unregister()
will only be called by DMA driver exit routines. But this assumption is not
true for the IOAT driver, which calls dma_async_device_unregister() from
ioat_remove(). So the current IOAT driver doesn't support device
hot-removal, because hot-removing an in-use IOAT device may cause a system
crash.

To support CPU socket hot-removal, all PCI devices, including the IOAT
devices embedded in the socket, will be hot-removed. The ideal solution
would be to enhance the dmaengine core and the IOAT driver to support
hot-removal, but that's too hard.

This patch implements a hack to disable IOAT devices under a hotplug-capable
CPU socket so they won't break socket hot-removal.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-16 18:28:28 +05:30
7618d0359c dmaengine: ioatdma: Set non RAID channels to be private capable
This allows claiming non-RAID channels as private channels. This prevents
breakage of MD RAID using the IOATDMA channels via async_tx, but also
allows agents such as NTB to claim channels exclusively for their own use.
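
A hedged sketch of the mechanism (the capability test is an assumption; the
essential part is advertising DMA_PRIVATE so dma_request_channel() can hand the
channels out exclusively):

  /* during ioatdma probe, roughly: */
  if (!is_raid_device)
          dma_cap_set(DMA_PRIVATE, dma->cap_mask);  /* claimable exclusively, e.g. by NTB */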

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2015-07-07 09:54:32 +05:30