| author | Victor (Weiguo) Pan <wpan@nvidia.com> | 2010-07-23 14:47:21 -0700 |
|---|---|---|
| committer | Gary King <gking@nvidia.com> | 2010-07-26 09:43:47 -0700 |
| commit | a355e80e1c02376b2a336a685896af817d2d5deb (patch) | |
| tree | 138411a5db3c838bd594bbc54b46e48bd7c753b1 | |
| parent | cab2cf99b1e377918629f0b701a438d01f5edefa (diff) | |
[arm/tegra] DMA: Delayed ISR recovery routine.
Sometimes, due to high interrupt latency in the continuous mode
of DMA transfer, the half-buffer-complete interrupt is handled
only after the DMA has already transferred the full buffer. In
this case the SW DMA state and the HW DMA state are out of sync.
When this is detected, stop the DMA immediately and restart it
with the next buffer if that buffer is ready.
bug 696953
Change-Id: Ic4b7cb251e472a309e9583eedbd26ea5dfcfceb1
Reviewed-on: http://git-master/r/4351
Tested-by: Victor (Weiguo) Pan <wpan@nvidia.com>
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-by: Scott Peterson <speterson@nvidia.com>
Reviewed-by: Venkata (Muni) Anda <vanda@nvidia.com>
Reviewed-by: Gary King <gking@nvidia.com>
-rw-r--r-- | arch/arm/mach-tegra/dma.c | 33 |
1 files changed, 33 insertions, 0 deletions
```diff
diff --git a/arch/arm/mach-tegra/dma.c b/arch/arm/mach-tegra/dma.c
index 0cc97b707417..9e62051917ef 100644
--- a/arch/arm/mach-tegra/dma.c
+++ b/arch/arm/mach-tegra/dma.c
@@ -628,6 +628,39 @@ static void handle_continuous_dma(struct tegra_dma_channel *ch)
 	req = list_entry(ch->list.next, typeof(*req), node);
 	if (req) {
 		if (req->buffer_status == TEGRA_DMA_REQ_BUF_STATUS_EMPTY) {
+			bool is_dma_ping_complete;
+			is_dma_ping_complete = (readl(ch->addr + APB_DMA_CHAN_STA)
+						& STA_PING_PONG) ? true : false;
+			if (req->to_memory)
+				is_dma_ping_complete = !is_dma_ping_complete;
+			/* Out of sync - Release current buffer */
+			if (!is_dma_ping_complete) {
+				int bytes_transferred;
+
+				bytes_transferred =
+					(ch->csr & CSR_WCOUNT_MASK) >> CSR_WCOUNT_SHIFT;
+				bytes_transferred += 1;
+				bytes_transferred <<= 3;
+				req->buffer_status = TEGRA_DMA_REQ_BUF_STATUS_FULL;
+				req->bytes_transferred = bytes_transferred;
+				req->status = TEGRA_DMA_REQ_SUCCESS;
+				tegra_dma_stop(ch);
+
+				if (!list_is_last(&req->node, &ch->list)) {
+					struct tegra_dma_req *next_req;
+
+					next_req = list_entry(req->node.next,
+						typeof(*next_req), node);
+					tegra_dma_update_hw(ch, next_req);
+				}
+
+				list_del(&req->node);
+
+				/* DMA lock is NOT held when callback is called */
+				spin_unlock(&ch->lock);
+				req->complete(req);
+				return;
+			}
 			/* Load the next request into the hardware, if available
 			 * */
 			if (!list_is_last(&req->node, &ch->list)) {
```
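The sketch below restates, outside the driver, the two decisions the hunk adds: when the half-buffer interrupt should be treated as stale, and how many bytes the released request is credited with. It is an illustration only, not driver code: `dma_out_of_sync()` and `full_buffer_bytes()` are hypothetical helper names, and the byte arithmetic assumes the continuous-mode word count is programmed elsewhere in the driver as `(size >> 3) - 1`, i.e. it covers half of the double buffer.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical helper, not a driver function. 'sta_ping_pong' stands for
 * the STA_PING_PONG bit read from APB_DMA_CHAN_STA, 'to_memory' for
 * req->to_memory. Mirrors the sense inversion used in the patch.
 */
static bool dma_out_of_sync(bool sta_ping_pong, bool to_memory)
{
	bool ping_complete = sta_ping_pong;

	if (to_memory)
		ping_complete = !ping_complete;

	/* The half-buffer interrupt implies "ping just finished"; if the
	 * status disagrees, the hardware has run ahead of software. */
	return !ping_complete;
}

/*
 * Hypothetical helper: byte count credited when the stale buffer is
 * released. Assuming the continuous-mode word count covers half of the
 * double buffer, (wcount + 1) 32-bit words * 4 bytes * 2 halves = size.
 */
static uint32_t full_buffer_bytes(uint32_t csr_wcount)
{
	return (csr_wcount + 1) << 3;
}

int main(void)
{
	/* A 4 KiB double buffer would be programmed with wcount = 511. */
	printf("bytes credited: %u\n", (unsigned)full_buffer_bytes(511));

	/* Recovery case for a to-memory transfer whose status bit is set. */
	printf("out of sync:    %d\n", dma_out_of_sync(true, true));
	return 0;
}
```

When the recovery path fires, the hunk drops the channel spinlock before invoking req->complete(), since the completion callback is called without the DMA lock held.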