path: root/include/linux/dma-attrs.h
2013-11-26  common: DMA-mapping: add DMA_ATTR_ALLOC_EXACT_SIZE attribute  (Vandana Salve)

Add the DMA_ATTR_ALLOC_EXACT_SIZE attribute to the DMA-mapping subsystem. By default dma_alloc_coherent()/dma_free_coherent() allocate and release memory in power-of-two numbers of pages. By specifying this attribute, allocation and release are done for the exact requested size, reducing internal memory fragmentation.

Bug 1380639

Change-Id: I49eb6a0caeb85aa84ff75fab6a4cf3c6a6d96abb
Signed-off-by: Vandana Salve <vsalve@nvidia.com>
Reviewed-on: http://git-master/r/334416
GVS: Gerrit_Virtual_Submit
Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>

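A minimal sketch of how a driver might use this attribute, assuming a tree that carries this patch (DMA_ATTR_ALLOC_EXACT_SIZE is not in mainline); the helper name is illustrative and the struct dma_attrs API matches the dma-attrs.h of this period:

#include <linux/dma-mapping.h>

/* Allocate a coherent buffer of exactly 'size' bytes, without the default
 * power-of-two page rounding. Assumes DMA_ATTR_ALLOC_EXACT_SIZE exists in
 * this tree; the same attrs must be passed to dma_free_attrs() later. */
static void *alloc_exact(struct device *dev, size_t size, dma_addr_t *dma)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_ALLOC_EXACT_SIZE, &attrs);
        return dma_alloc_attrs(dev, size, dma, GFP_KERNEL, &attrs);
}
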
2013-09-26  common: DMA-mapping: add DMA_ATTR_SKIP_IOVA_GAP attribute  (Hiroshi Doyu)

Add the DMA_ATTR_SKIP_IOVA_GAP attribute so that the client can control whether gap pages are inserted between IOVA mappings.

Bug 1303110
Bug 1173494

Change-Id: Ia8fb2b9b807661c861b5496a467b3ca91af8f435
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/275021

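A sketch of setting the attribute on a scatterlist mapping, again assuming a tree with this patch applied; the helper name and DMA direction are illustrative:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Map a scatterlist without the IOVA gap pages the platform would
 * otherwise insert. Assumes DMA_ATTR_SKIP_IOVA_GAP from this tree. */
static int map_sg_no_gap(struct device *dev, struct scatterlist *sgl, int nents)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_SKIP_IOVA_GAP, &attrs);
        return dma_map_sg_attrs(dev, sgl, nents, DMA_TO_DEVICE, &attrs);
}
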
2013-09-14  common: DMA-mapping: Add {read,write}-only attr  (Hiroshi Doyu)

Adds the DMA_ATTR_READ_ONLY and DMA_ATTR_WRITE_ONLY attributes to the DMA-mapping subsystem. These request that a mapping be made read-only or write-only, to be applied by each IOMMU's hardware.

Bug 1309863

Change-Id: Ie3203014d83a519653d292c243e863244daa9675
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/260008
GVS: Gerrit_Virtual_Submit
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>

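A sketch of requesting a read-only IOMMU mapping with the new attribute, assuming a tree carrying this patch; the helper name and direction are illustrative:

#include <linux/dma-mapping.h>

/* Map a buffer so the device may only read from it (enforced by the
 * IOMMU). Assumes DMA_ATTR_READ_ONLY from this tree. */
static dma_addr_t map_device_readonly(struct device *dev, void *buf, size_t len)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_READ_ONLY, &attrs);
        return dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, &attrs);
}
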
2013-09-14  common: DMA-mapping: add DMA_ATTR_SKIP_FREE_IOVA attribute  (Hiroshi Doyu)

This patch adds the DMA_ATTR_SKIP_FREE_IOVA attribute to the DMA-mapping subsystem. It is the counterpart of map_page_at(), which just maps a pre-allocated IOVA to a page. With this attribute, unmap_page() removes the link between the IOVA and the page, leaving the IOVA allocated.

Bug 1235233

Change-Id: Id5535b73e0ca212a045dd0b0ff57de8432e7cf13
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/204468
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>

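A sketch of an unmap that keeps the IOVA reservation alive, assuming a tree with this patch (and its map_page_at() counterpart); the helper name and direction are illustrative:

#include <linux/dma-mapping.h>

/* Tear down the page mapping but keep the IOVA allocated so it can be
 * remapped later. Assumes DMA_ATTR_SKIP_FREE_IOVA from this tree. */
static void unmap_keep_iova(struct device *dev, dma_addr_t iova, size_t len)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_SKIP_FREE_IOVA, &attrs);
        dma_unmap_single_attrs(dev, iova, len, DMA_TO_DEVICE, &attrs);
}
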
2012-11-29  common: DMA-mapping: add DMA_ATTR_FORCE_CONTIGUOUS attribute  (Marek Szyprowski)

This patch adds the DMA_ATTR_FORCE_CONTIGUOUS attribute to the DMA-mapping subsystem. By default, the DMA-mapping subsystem is allowed to assemble a buffer allocated by dma_alloc_attrs() from individual pages, as long as it can be mapped as a contiguous chunk in the device's DMA address space. By specifying this attribute, the allocated buffer is forced to be contiguous in physical memory as well.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

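A minimal sketch of forcing a physically contiguous allocation, using the struct dma_attrs based dma_alloc_attrs() of this era; the helper name is illustrative:

#include <linux/dma-mapping.h>

/* Allocate a buffer that is physically contiguous, not just contiguous
 * in the device's DMA address space. */
static void *alloc_phys_contig(struct device *dev, size_t size, dma_addr_t *dma)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_FORCE_CONTIGUOUS, &attrs);
        return dma_alloc_attrs(dev, size, dma, GFP_KERNEL, &attrs);
}
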
2012-07-30  common: DMA-mapping: add DMA_ATTR_SKIP_CPU_SYNC attribute  (Marek Szyprowski)

This patch adds the DMA_ATTR_SKIP_CPU_SYNC attribute to the DMA-mapping subsystem.

By default, the dma_map_{single,page,sg} family of functions transfers a given buffer from the CPU domain to the device domain. Some advanced use cases require sharing a buffer between more than one device. This requires a mapping to be created separately for each device, and is usually done by calling dma_map_{single,page,sg} more than once for the given buffer, with the device pointer of each device taking part in the sharing. The first call transfers the buffer from the 'CPU' domain to the 'device' domain, which synchronizes the CPU caches for the given region (usually meaning the cache has been flushed or invalidated, depending on the DMA direction). However, subsequent calls to dma_map_{single,page,sg}() for the other devices perform exactly the same synchronization of the CPU cache. CPU cache synchronization can be a time-consuming operation, especially if the buffers are large, so it is highly recommended to avoid it when possible.

DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of the CPU cache for the given buffer, on the assumption that it has already been transferred to the 'device' domain. The attribute can also be used with the dma_unmap_{single,page,sg} family to force the buffer to stay in the device domain after its mapping is released. Use this attribute with care!

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>

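A sketch of the buffer-sharing case described above: the first device's mapping performs the CPU cache maintenance, the second mapping skips it. Device and variable names are illustrative:

#include <linux/dma-mapping.h>

/* Share one buffer between two devices in the same direction: the first
 * mapping does the cache maintenance, the second skips it because the
 * buffer is already in the 'device' domain. */
static int map_shared(struct device *dev_a, struct device *dev_b,
                      void *buf, size_t len,
                      dma_addr_t *dma_a, dma_addr_t *dma_b)
{
        DEFINE_DMA_ATTRS(attrs);

        *dma_a = dma_map_single(dev_a, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev_a, *dma_a))
                return -ENOMEM;

        dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
        *dma_b = dma_map_single_attrs(dev_b, buf, len, DMA_TO_DEVICE, &attrs);
        if (dma_mapping_error(dev_b, *dma_b)) {
                dma_unmap_single(dev_a, *dma_a, len, DMA_TO_DEVICE);
                return -ENOMEM;
        }
        return 0;
}
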
2012-07-30  common: DMA-mapping: add DMA_ATTR_NO_KERNEL_MAPPING attribute  (Marek Szyprowski)

This patch adds the DMA_ATTR_NO_KERNEL_MAPPING attribute, which lets the platform avoid creating a kernel virtual mapping for the allocated buffer. On some architectures creating such a mapping is a non-trivial task and consumes scarce resources (such as kernel virtual address space or DMA-consistent address space). Buffers allocated with this attribute can be passed to user space only by calling dma_mmap_attrs().

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

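A sketch of an allocation without a kernel mapping that is later handed to user space via dma_mmap_attrs(); the my_buf structure and helper names are illustrative:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

struct my_buf {
        void *cookie;           /* opaque cookie: not dereferenceable by the CPU */
        dma_addr_t dma;
        size_t size;
        struct dma_attrs attrs;
};

/* Allocate without creating a kernel virtual mapping. */
static int my_buf_alloc(struct device *dev, struct my_buf *b, size_t size)
{
        init_dma_attrs(&b->attrs);
        dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &b->attrs);
        b->size = size;
        b->cookie = dma_alloc_attrs(dev, size, &b->dma, GFP_KERNEL, &b->attrs);
        return b->cookie ? 0 : -ENOMEM;
}

/* Map the buffer into user space from the driver's mmap handler. */
static int my_buf_mmap(struct device *dev, struct my_buf *b,
                       struct vm_area_struct *vma)
{
        return dma_mmap_attrs(dev, vma, b->cookie, b->dma, b->size, &b->attrs);
}
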
2012-03-28  common: DMA-mapping: add NON-CONSISTENT attribute  (Marek Szyprowski)

DMA_ATTR_NON_CONSISTENT lets the platform choose to return either consistent or non-consistent memory, as it sees fit. By using this API, you guarantee to the platform that you have all the correct and necessary sync points for this memory in the driver.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>

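A sketch of allocating possibly non-consistent memory and of the explicit sync points the driver then owes; helper names are illustrative and the exact sync calls required are platform dependent:

#include <linux/dma-mapping.h>

/* The platform may return non-consistent memory here; the driver then
 * owns cache maintenance around every transfer. */
static void *alloc_maybe_noncoherent(struct device *dev, size_t size,
                                     dma_addr_t *dma)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
        return dma_alloc_attrs(dev, size, dma, GFP_KERNEL, &attrs);
}

/* Example sync points owed by the driver (one possible choice of calls). */
static void before_device_reads(struct device *dev, dma_addr_t dma, size_t size)
{
        dma_sync_single_for_device(dev, dma, size, DMA_TO_DEVICE);
}

static void after_device_writes(struct device *dev, dma_addr_t dma, size_t size)
{
        dma_sync_single_for_cpu(dev, dma, size, DMA_FROM_DEVICE);
}
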
2012-03-28  common: DMA-mapping: add WRITE_COMBINE attribute  (Marek Szyprowski)

DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be buffered to improve performance. It will be used by the replacement for the ARM/AVR32-specific dma_alloc_writecombine() function.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>

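A sketch of the dma_alloc_writecombine() replacement mentioned above, using dma_alloc_attrs() with this attribute under the struct dma_attrs API of this era; the helper name is illustrative:

#include <linux/dma-mapping.h>

/* Allocate a buffer whose CPU mapping is write-combined rather than
 * fully uncached. */
static void *alloc_wc(struct device *dev, size_t size, dma_addr_t *dma)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
        return dma_alloc_attrs(dev, size, dma, GFP_KERNEL, &attrs);
}
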
2008-07-22  powerpc/cell: Add DMA_ATTR_WEAK_ORDERING dma attribute and use in Cell IOMMU code  (Mark Nelson)

Introduce a new DMA attribute, DMA_ATTR_WEAK_ORDERING, to allow weak ordering on DMA mappings on the Cell processor, and add the code to the Cell's IOMMU implementation to use it. Dynamic mappings can be weakly or strongly ordered on an individual basis, but the fixed mapping has to be either completely strong or completely weak. This is currently decided by a kernel boot option (pass iommu_fixed=weak for a weakly ordered fixed linear mapping; strongly ordered is the default).

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

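A sketch of requesting a weakly ordered dynamic mapping with the new attribute; the helper name and DMA direction are illustrative, and whether the attribute takes effect depends on the platform's IOMMU code honouring it:

#include <linux/dma-mapping.h>

/* Request weak ordering for an individual dynamic mapping. */
static dma_addr_t map_weak(struct device *dev, void *buf, size_t len)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_WEAK_ORDERING, &attrs);
        return dma_map_single_attrs(dev, buf, len, DMA_BIDIRECTIONAL, &attrs);
}
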
2008-04-29  dma: add dma_*map*_attrs() interfaces  (Arthur Kepner)

Introduce new interfaces, dma_*map*_attrs(), for passing architecture-specific attributes when memory is mapped and unmapped for DMA. Give the interfaces default implementations which ignore attributes. Also introduce the dma_{set|get}_attr() interfaces for setting and retrieving individual attributes. Define one attribute, DMA_ATTR_WRITE_BARRIER, in anticipation of its use by ia64/sn. Select whether architectures implement arch-specific versions of the dma_*map*_attrs() interfaces via HAVE_DMA_ATTRS in Kconfig.

[markn@au1.ibm.com: dma_{set,get}_attr() have to be static inline]

Signed-off-by: Arthur Kepner <akepner@sgi.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Jes Sorensen <jes@sgi.com>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Roland Dreier <rdreier@cisco.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: David Miller <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

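A sketch of the basic shape of the interfaces introduced here: build an attribute set with DEFINE_DMA_ATTRS()/dma_set_attr(), pass it to a dma_*map*_attrs() call, and query it with dma_get_attr(); the helper name is illustrative:

#include <linux/dma-mapping.h>

/* Map a buffer while requesting write-barrier semantics for the DMA. */
static dma_addr_t map_with_barrier(struct device *dev, void *buf, size_t len)
{
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);

        /* dma_get_attr() reports whether an attribute is set. */
        if (dma_get_attr(DMA_ATTR_WRITE_BARRIER, &attrs))
                dev_dbg(dev, "write-barrier semantics requested\n");

        return dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, &attrs);
}
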