diff options
author | Laxman Dewangan <ldewangan@nvidia.com> | 2010-04-09 22:24:25 +0530 |
---|---|---|
committer | Gary King <gking@nvidia.com> | 2010-04-13 15:57:22 -0700 |
commit | c31c451d72bc2c6e3973adf4d3356a872cc77539 (patch) | |
tree | 690fb78a2e02b0d35f2a823ee31de2994930385c /arch | |
parent | e1a99bd90a73cd427aed02bb9bbee49c7cc09c8a (diff) |
tegra uart: Fixing the tx and rx dma path issue
The following issues have been fixed:
- Blocking write was returning immediately if the requested data length was a
multiple of 4.
- Blocking write could not complete if the data length was more than 4 and not
a multiple of 4.
- Close was taking too much time because the proper timeout and FIFO size were
not configured.
- Tx DMA path optimized to fill more data into the DMA buffer when more
characters are pending in the buffer.
- Tx path fixed to properly signal the wakeup event to the tty layer.
- RTS flow control was not being set on a second open even when the cflag
requested it.
- Rx DMA was not receiving correct data after a second open; multiple requests
were getting queued on the receive path at close time.
- Rx DMA was started before the UART controller was configured, causing the
DMA to misbehave.
- Transfer count was not being calculated correctly in the DMA driver.
Pending issue:
- Data is lost when more than 32K of data is sent in a single shot. Debugging
this.
Tested on Harmony with various test cases developed for testing the Linux
serial driver.
Change-Id: I6ed9095dd6340d2b5e7ef036823d2e4e5a61abcc
Reviewed-on: http://git-master/r/1065
Tested-by: Suresh Mangipudi <smangipudi@nvidia.com>
Reviewed-by: Udaykumar Rameshchan Raval <uraval@nvidia.com>
Tested-by: Udaykumar Rameshchan Raval <uraval@nvidia.com>
Reviewed-by: Gary King <gking@nvidia.com>
Diffstat (limited to 'arch')
-rw-r--r-- | arch/arm/mach-tegra/dma.c | 13 |
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/arch/arm/mach-tegra/dma.c b/arch/arm/mach-tegra/dma.c
index aae36b30664d..4ad9e19ff2d3 100644
--- a/arch/arm/mach-tegra/dma.c
+++ b/arch/arm/mach-tegra/dma.c
@@ -179,7 +179,16 @@ int tegra_dma_dequeue_req(int channel, struct tegra_dma_req *_req)
 	req_transfer_count = NV_DRF_VAL(APBDMACHAN_CHANNEL_0, CSR,
 		WCOUNT, ch->csr);
-	req->bytes_transferred = req_transfer_count - to_transfer;
+	if (status & NV_DRF_DEF(APBDMACHAN_CHANNEL_0, STA, BSY, ACTIVE)) {
+		if (to_transfer)
+			req->bytes_transferred = (req_transfer_count -
+				to_transfer);
+		else
+			req->bytes_transferred = (req_transfer_count);
+	} else {
+		req->bytes_transferred = (req_transfer_count + 1);
+	}
+	req->bytes_transferred *= 4;
 	/* In continous transfer mode, DMA only tracks the count of the
 	 * half DMA buffer. So, if the DMA already finished half the DMA
@@ -191,7 +200,7 @@ int tegra_dma_dequeue_req(int channel, struct tegra_dma_req *_req)
 	 */
 	if (ch->mode & TEGRA_DMA_MODE_CONTINOUS)
 		if (req->buffer_status == TEGRA_DMA_REQ_BUF_STATUS_HALF_FULL) {
-			req->bytes_transferred += 4 * req_transfer_count;
+			req->bytes_transferred += 4 * (req_transfer_count +1);
 		}
 	tegra_dma_stop(ch);