path: root/arch/arm/mm/dma-mapping.c
2014-04-23  arm: dma-mapping: fix wrong print size  (Hiroshi Doyu)
Please refer to "Documentation/printk-formats.txt". Change-Id: I3b14a2dfbe98edce0c09fcef6d5609299c59e64a Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/394698 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
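The fix above follows Documentation/printk-formats.txt: a `size_t` must be printed with the `%zu`/`%zx` specifiers rather than `%d`, which warns or truncates on configurations where `size_t` is not int-sized. A minimal userspace sketch (not the actual patch) of the correct specifier:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Illustrative only: printk and printf share the %zu convention for size_t. */
static int format_size(char *buf, size_t n, size_t size)
{
    return snprintf(buf, n, "size=%zu", size);  /* %zu matches size_t's width */
}
```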
2014-04-23  arm: dma-mapping: fix wrong end address  (Hiroshi Doyu)
The end address shouldn't overlap the next start address. Change-Id: I48f95f43b79b9e678908a2a6f0562347d671fcad Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/394697 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-04-04  arm: dma-mapping: fix incorrect phys address in trace message  (Krishna Reddy)
Fix the incorrect phys address printed in the trace message in arm_coherent_iommu_unmap_page(). iommu_iova_to_phys() is called after the iova has been unmapped in arm_coherent_iommu_unmap_page(), so an invalid phys address is printed in the trace message. Change-Id: I94acbfc9ef6f25c765ed3057b474596c8f12c6dc Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/391952 GVS: Gerrit_Virtual_Submit Reviewed-by: Alex Waterman <alexw@nvidia.com>
2014-04-04  arm: dma-mapping: add missing trace print in unmap_page  (Krishna Reddy)
Add missing trace print in arm_iommu_unmap_page. Change-Id: I77914350aadccb0688b02658185e26652d4fdf2b Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/391951 GVS: Gerrit_Virtual_Submit Reviewed-by: Alex Waterman <alexw@nvidia.com>
2014-04-04  arm: dma-mapping: pass correct args to trace_dma_map_page  (Krishna Reddy)
Change-Id: Id97ea4af04323f4a4c18597a4dba0423c3061a27 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/391950 GVS: Gerrit_Virtual_Submit Reviewed-by: Alex Waterman <alexw@nvidia.com>
2014-04-02  ARM: dma-mapping: add device_is_iommuable()  (Hiroshi Doyu)
A helper function to check whether a device is iommuable or not. Bug 1490400 Change-Id: I65b803e57ad19e32214df8fde054a4f68535a74b Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/390156 Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com>
2014-03-13  Merge branch 'linux-3.10.33' into dev-kernel-3.10  (Deepak Nibade)
Bug 1456092 Change-Id: I3021247ec68a3c2dddd9e98cde13d70a45191d53 Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
2014-03-06  ARM: dma-mapping: fix GFP_ATOMIC macro usage  (Marek Szyprowski)
commit 10c8562f932d89c030083e15f9279971ed637136 upstream. GFP_ATOMIC is not a single gfp flag, but a macro which expands to the other flags and LACK of __GFP_WAIT flag. To check if caller wanted to perform an atomic allocation, the code must test __GFP_WAIT flag presence. This patch fixes the issue introduced in v3.6-rc5 Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
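The pattern this commit fixes can be sketched in isolation. The flag values below are made up for illustration (the real definitions live in `<linux/gfp.h>`); the point is that GFP_ATOMIC is a composite value, so equality tests against it miss combinations such as `GFP_ATOMIC | __GFP_HIGHMEM`:

```c
#include <stdbool.h>
#include <assert.h>

/* Hypothetical flag values, NOT the kernel's real ones. */
#define MY___GFP_WAIT  0x10u
#define MY___GFP_HIGH  0x20u
#define MY_GFP_ATOMIC  (MY___GFP_HIGH)            /* lacks __GFP_WAIT */
#define MY_GFP_KERNEL  (MY___GFP_WAIT | 0x40u)

/* Wrong: an equality test misses GFP_ATOMIC ORed with other modifiers. */
static bool is_atomic_wrong(unsigned gfp) { return gfp == MY_GFP_ATOMIC; }

/* Right: an allocation is atomic when the caller may not sleep,
 * i.e. when __GFP_WAIT is absent. */
static bool is_atomic_right(unsigned gfp) { return !(gfp & MY___GFP_WAIT); }
```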
2013-12-19  ARM: dma-mapping: exclude DMA32, HIGHMEM for meta data  (Hiroshi Doyu)
The slab allocator doesn't accept the DMA32 and HIGHMEM gfp flags, and the passed gfp parameter is for the buffer allocation, not for the metadata (the page pointer array). Drop DMA32 and HIGHMEM for the metadata allocation. Bug 1414172 Bug 1427490 Change-Id: Ie894c110ada185aed04d5fc41a46fa1de4bd9670 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/346947 GVS: Gerrit_Virtual_Submit
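The shape of that fix is a simple mask before the metadata allocation. A sketch with made-up flag bits (the real modifiers are in `<linux/gfp.h>`): the zone modifiers only make sense for the buffer itself, so they are stripped from the gfp used for the slab-allocated page pointer array.

```c
#include <assert.h>

/* Illustrative stand-ins for __GFP_DMA32 / __GFP_HIGHMEM. */
#define X__GFP_DMA32   0x04u
#define X__GFP_HIGHMEM 0x02u

/* Derive the gfp mask for metadata from the caller's buffer gfp:
 * keep everything except the zone modifiers slab cannot honor. */
static unsigned meta_gfp(unsigned buffer_gfp)
{
    return buffer_gfp & ~(X__GFP_DMA32 | X__GFP_HIGHMEM);
}
```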
2013-12-12  ARM: dma-mapping: alloc highmem w/o GFP_DMA*  (Hiroshi Doyu)
There is a case where a client wants to avoid highmem explicitly; for example, some PM code lets firmware access pages during suspend/resume while the IOMMU is off. This allows a client to specify a page type (GFP_DMA, GFP_DMA32) without contradicting GFP_HIGHMEM. This would be valid if AArch64 selects ZONE_DMA(32). Bug 1414172 Change-Id: I3c97262f9388094dae12a350420a3d184d1e7144 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/343645 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2013-12-11  ARM: dma-mapping: Use %pa for {dma,phys}_addr_t  (Hiroshi Doyu)
The data size varies depending on LPAE (32 or 64 bit). Modified to support both without any build warnings. Change-Id: Iacb8cbbffdab9f7148a62315e011c9a3a74d9030 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/339861 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
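In the kernel the fix is simply `printk("%pa", &addr)`, which takes a pointer to the `{dma,phys}_addr_t` and prints it at the width the configuration actually uses. A userspace illustration of the underlying width problem (the typedef here is an assumption standing in for `phys_addr_t` under LPAE): a 32-bit specifier would truncate a >4 GiB physical address, while a width-correct one does not.

```c
#include <stdio.h>
#include <inttypes.h>
#include <string.h>
#include <assert.h>

typedef uint64_t my_phys_addr_t;   /* 64-bit, as with LPAE enabled */

static int show(char *buf, size_t n, my_phys_addr_t pa)
{
    /* "%x" would truncate to 32 bits here; PRIx64 keeps all 64. */
    return snprintf(buf, n, "%" PRIx64, pa);
}
```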
2013-11-28  arm: dma-mapping: support DMA_ATTR_ALLOC_EXACT_SIZE  (Vandana Salve)
This new attribute handles allocation and release of exact-size memory by calling the attrs version of dma_alloc_from_coherent()/dma_release_from_coherent(). bug 1380639 Change-Id: I2af8c8131ff552ae5e0ac3a628139318b3395a73 Signed-off-by: Vandana Salve <vsalve@nvidia.com> Reviewed-on: http://git-master/r/334000 Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com> Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
2013-09-27  ARM: dma-mapping: Undefined debug_dma_platformdata  (Hiroshi Doyu)
Fix undefined reference to debug_dma_platformdata Bug 1373902 Change-Id: I77544b64f84e8e43a9bfb873f6b2af375d341f0d Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/278134 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> GVS: Gerrit_Virtual_Submit
2013-09-27  dma-debug: Use ftrace in {map,unmap}_*() calls  (Konsta Holtta)
Log the map/unmap/alloc/free calls. This ftrace is enabled by default. Bug 1173494 Change-Id: I01fe24e570346413644368a6bff1578814f05f5a Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/268383 GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-09-27  dma-debug: dump buffers and mappings via debugfs  (Konsta Holtta)
Export via debugfs the debug-dma infrastructure's data about allocated mappings, and architecture specific information about possible mappings. Bug 1173494 Change-Id: I6c64364dad69f83fd301a89938fe184dde33806a Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/268384 GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-09-26  ARM: dma-mapping: Support DMA_ATTR_SKIP_IOVA_GAP  (Hiroshi Doyu)
This new attribute controls whether gap pages are appended on each allocation, via a DMA attribute. The same needs to be done when freeing. Bug 1303110 Bug 1173494 Change-Id: I6a038fbfb3e960d740fdc9aa752e2b88313af4b0 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/275022
2013-09-26  ARM: dma-mapping: IOVA API change  (Hiroshi Doyu)
Needed along with the common IOVA API change to pass the DMA attribute. Bug 1303110 Bug 1173494 Change-Id: Ia12984bbaf2486b0c1031feae983967e011270f2 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/275019
2013-09-26  ARM: dma-mapping: Introduce iova_gap_phys  (Hiroshi Doyu)
The variable "iova_gap_pages" is global in this file, so assigning iova_gap_phys as its physical address makes the code a bit easier to read than converting via page_to_phys() every time. Bug 1303110 Bug 1173494 Change-Id: Ifb8c5154fd5cc07d76e678fbf0ca84cb5c919f72 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/275017
2013-09-26  ARM: dma-mapping: Rename prefetch_pages to iova_gap_pages  (Hiroshi Doyu)
Use the correct name, "iova_gap_pages". These pages are not only for the prefetcher but also guard between IOVA allocations, so we call them gap pages, which can include a prefetch page and/or a guard page, to avoid confusion. Bug 1303110 Bug 1173494 Change-Id: Ie46cc8e33e2fdd520931d05f087a075af52c1a81 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/275016
2013-09-16  ARM: dma-mapping: Fix build warning  (Hiroshi Doyu)
Inconsistent type for set_dma_ops() Change-Id: I78ebecaa335cafe627dc673cf54590d3354a7dc8 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/273536 Reviewed-by: Vandana Salve <vsalve@nvidia.com>
2013-09-14  ARM: dma-mapping: Fix IOVA end addr check strictly  (Hiroshi Doyu)
At IOVA area allocation, the end address check isn't strict enough in the case of __alloc_iova_at(). Bug 1353121 Bug 1343762 Change-Id: Iebb1b100313ff70c23bbf262dddddfde1a52727b Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/265018 GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-09-14  ARM: dma-mapping: Set iommu_ops before attach  (Hiroshi Doyu)
Make iommu_ops available before iommu_attach_device(). Bug 1297607 Change-Id: I41f6f8c71e7056f67f8245bbcddd1cd6f3ecf5bf Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/264253 Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
2013-09-14  ARM: dma-mapping: Pass DMA attrs as IOMMU prot  (Hiroshi Doyu)
Pass the DMA attribute as an IOMMU property, which can be processed in the backend implementation of the IOMMU. For example, DMA_ATTR_READ_ONLY can be translated into each IOMMU H/W implementation. Bug 1309863 Change-Id: I0921ec35a4e056ea45a79acbea3e3b58a5a13a66 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/260009 GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
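The translation this commit describes can be sketched as a small mapping from attribute bits to a prot bitmask. The `_X` names below are illustrative stand-ins for the kernel's `DMA_ATTR_READ_ONLY`, `IOMMU_READ`, and `IOMMU_WRITE`, not the real definitions:

```c
#include <assert.h>

/* Hypothetical bit values for illustration only. */
#define ATTR_READ_ONLY_X 0x1u
#define IOMMU_READ_X     0x1u
#define IOMMU_WRITE_X    0x2u

/* Translate DMA attrs into an IOMMU prot mask the backend can enforce. */
static unsigned attrs_to_prot(unsigned attrs)
{
    if (attrs & ATTR_READ_ONLY_X)
        return IOMMU_READ_X;              /* backend maps the page read-only */
    return IOMMU_READ_X | IOMMU_WRITE_X;  /* default: read/write */
}
```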
2013-09-14  arm: mm: dma-mapping: don't let prefetch page mapping fail  (Konsta Holtta)
Require the prefetch page to map as well in pg_iommu_map*(): return an error if either it or the requested buffer fails to map. Bug 1303110 Bug 1338469 Change-Id: I57f59207fb547390d87a24d7c4c7faec23a61768 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Reviewed-on: http://git-master/r/256163 (cherry picked from commit b6fe2c18b9528a112943a054866a452f1fff3425) Reviewed-on: http://git-master/r/259519 Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com>
2013-09-14  arm: mm: dma-mapping: add support for prefetch and gap pages  (Krishna Reddy)
Add support for prefetch and gap pages to be part of IOVA allocations and mapping. Prefetch pages are necessary to avoid SMMU faults, which are the result of HW engines speculatively fetching beyond the IOVA-mapped area. Gap pages separate the IOVA allocations in order to catch IOVA access violations. Bug 1303110 Bug 1265246 Bug 1215880 Bug 1327616 Change-Id: Ieacc0cd0a82e7f93746b453dafcec6a1766088a6 Signed-off-by: Konsta Holtta <kholtta@nvidia.com> Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/246693 (cherry picked from commit 889be8cbffa184c38f31542546d1f1ffbe8d8502) Signed-off-by: Ajay Nandakumar <anandakumarm@nvidia.com> Reviewed-on: http://git-master/r/264780
2013-09-14  ARM: dma-mapping: Use iommu_map_sg() in dma_map_sg()  (Hiroshi Doyu)
Use iommu_map_sg() in dma_map_sg() for perf instead of calling iommu_map() repeatedly. Bug 1304956 Bug 1327616 Change-Id: Ib5941f719fdf822a166fbbb0dc3fad22e2767e21 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/253307 (cherry picked from commit 1765eb73dd6f668e8f9fde99230af8fcba0bd906) Signed-off-by: Ajay Nandakumar <anandakumarm@nvidia.com> Reviewed-on: http://git-master/r/264779
2013-09-14  ARM: dma-mapping: Add dma_alloc_*at*_coherent()  (Hiroshi Doyu)
Add a version that lets the caller specify which IOVA to allocate. Bug 1309498 Bug 1327616 Change-Id: I434171b09e40f888190b696b567d25777c69bb45 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/247938 (cherry picked from commit 063bf38038fa11b2ba0b7af64a2151b74dee8516) Signed-off-by: Ajay Nandakumar <anandakumarm@nvidia.com> Reviewed-on: http://git-master/r/250834 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Dan Willemsen <dwillemsen@nvidia.com> Tested-by: Dan Willemsen <dwillemsen@nvidia.com>
2013-09-14  ARM: dma-mapping: Round-up IOVA map base  (Hiroshi Doyu)
This is necessary for iova_alloc_at(). On a high-order allocation, the lower bits of the base were ignored, returning an incorrect IOVA address. bug 1274699 bug 1254010 bug 1226176 bug 999937 Change-Id: I0be96b97c8036f8a5bc1c35a1c85e04593021a2b Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/228729 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
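The round-up the commit describes is plain bit arithmetic: on a high-order allocation the search base must be rounded up to the allocation's alignment, otherwise the low bits of the base are silently dropped and a wrong IOVA is returned. A minimal sketch (the function name is illustrative, not the kernel's):

```c
#include <assert.h>

/* Round base up to a 2^order boundary; rounding down (masking only)
 * would be exactly the bug: the base's low bits get discarded. */
static unsigned long align_base(unsigned long base, unsigned order)
{
    unsigned long mask = (1ul << order) - 1;
    return (base + mask) & ~mask;
}
```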
2013-09-14  ARM: dma-mapping: Add new API dma_ops->map_pages()  (Hiroshi Doyu)
Add new API dma_ops->map_pages() for performance bug 1254010 bug 1226176 bug 999937 Change-Id: Ib8bbcad53024225173be765358af03d0961f8af0 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/225673 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-09-14  common: DMA-mapping: add DMA_ATTR_SKIP_FREE_IOVA attribute  (Hiroshi Doyu)
This patch adds the DMA_ATTR_SKIP_FREE_IOVA attribute to the DMA-mapping subsystem. This is the counterpart of map_page_at(), which just maps a pre-allocated iova to a page. With this attribute, unmap_page() unmaps the link between the iova and a page, leaving the iova allocated. bug 1235233 Change-Id: Id5535b73e0ca212a045dd0b0ff57de8432e7cf13 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/204468 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Alex Waterman <alexw@nvidia.com> Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-09-14  ARM: dma-mapping: Trace IOMMU atomic allocation  (Hiroshi Doyu)
Add dev_dbg() to trace IOMMU atomic allocation, __iommu_{alloc,free}_atomic(). Usage:
echo -n 'func __iommu_alloc_atomic +p' > /sys/kernel/debug/dynamic_debug/control
echo -n 'func __iommu_free_atomic +p' > /sys/kernel/debug/dynamic_debug/control
Change-Id: Ibe322c2d009652eb6b07fd988ff1fff97268e207 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/197752 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
2013-09-14  ARM: dma-mapping: Allow unmap non page backed address  (Hiroshi Doyu)
pfn_valid() should be done in a caller function if *needed*. map/unmap care about address mapping only and should not care whether the address is a valid page. This allows unmapping an address out of kernel control via the DMA mapping API, for example ones mapped by dma_map_linear(-ENXIO) bug 1222494 Change-Id: I4d59e4078edf3c8876da8f4492bd0c306b693815 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/194630 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-09-14  ARM: dma-mapping: iova_alloc_at() handles out-of-range separately  (Hiroshi Doyu)
iova_alloc_at() sets -ENXIO at *iova for out-of-range, and -EINVAL for the rest of the failure cases. We need to differentiate out-of-range to know whether 'da' reservation fails because of overlapping or out-of-range. Bug 1182882 Bug 1024594 Change-Id: If0441c0521ef74c8792a8f0562a821ca76dbbc34 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/169327 Reviewed-by: Simone Willett <swillett@nvidia.com> Tested-by: Simone Willett <swillett@nvidia.com>
2013-09-14  ARM: dma-mapping: New dma_map_ops->map_page*_at* function  (Hiroshi Doyu)
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Rebase-Id: R6abb22d807d44a6a1e876116c58ab0e58c441e65
2013-09-14  ARM: dma-mapping: New dma_map_ops->iova_alloc*_at* function  (Hiroshi Doyu)
To allocate an IOVA area at a specified address. Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Rebase-Id: R6070776b91e090e2a3dd342743e8e07edd63b747
2013-09-14  ARM: dma-mapping: New dma_map_ops->iova_{alloc,free}() functions  (Hiroshi Doyu)
There are some cases where IOVA allocation and mapping have to be done separately, especially for performance optimization reasons. This patch allows client modules to {alloc,free} IOVA space without backing that area with actual pages. Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Rebase-Id: Rfe3d8edc2bdec9e69f90a76d8229974208b6bdf5
2013-09-14  ARM: dma-mapping: New dma_map_ops->iova_get_free_{total,max} functions  (Hiroshi Doyu)
->iova_get_free_total() returns the sum of available free areas. ->iova_get_free_max() returns the largest available free area size. Change-Id: I817ff46bea799a809e2e6b163b2e86cb737cf077 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Rebase-Id: R9fa69019291e268816b8b316c98e4f39d8555ea3
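The distinction between the two queries is easy to show over a free-extent list. The `struct extent` array below is a made-up illustration, not the driver's real IOVA data structure: total free space can be large while the largest single area (what a contiguous allocation actually needs) is much smaller.

```c
#include <assert.h>

struct extent { unsigned long start, len; };

/* Sum of all free areas, as ->iova_get_free_total() reports. */
static unsigned long free_total(const struct extent *e, int n)
{
    unsigned long sum = 0;
    for (int i = 0; i < n; i++)
        sum += e[i].len;
    return sum;
}

/* Largest single free area, as ->iova_get_free_max() reports. */
static unsigned long free_max(const struct extent *e, int n)
{
    unsigned long max = 0;
    for (int i = 0; i < n; i++)
        if (e[i].len > max)
            max = e[i].len;
    return max;
}
```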
2013-09-14  ARM: dma-mapping: Skip cache maint for invalid page  (Hiroshi Doyu)
Skip cache maintenance for an invalid page, but warn about it. Change-Id: I780587d132fd9440767046877c0724939897889f Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Rebase-Id: Rb3610fd43422fbce269431146defbc6eb7272de8
2013-09-14  ARM: dma: Drop GFP_COMP for DMA-IOMMU memory allocations  (Hiroshi Doyu)
dma_alloc_coherent wants to split pages after allocation in order to reduce the memory footprint. This does not work well with GFP_COMP pages, so drop this flag before allocation. This patch is ported from arch/avr32 (commit 3611553ef985ef7c5863c8a94641738addd04cff). Change-Id: I67eb2b15807c36c6a3ade95ee35bf5856d8e0e11 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Rebase-Id: Rf13a569ebe836d341cf4d9287ff4b4711dde85cc
2013-05-02  Merge branches 'devel-stable', 'entry', 'fixes', 'mach-types', 'misc' and 'smp-hotplug' into for-linus  (Russell King)
2013-04-17  ARM: 7693/1: mm: clean-up in order to reduce to call kmap_high_get()  (Joonsoo Kim)
In kmap_atomic(), kmap_high_get() is invoked to check an already mapped area. In __flush_dcache_page() and dma_cache_maint_page(), we explicitly call kmap_high_get() before kmap_atomic() when cache_is_vipt(), so kmap_high_get() can be invoked twice. This is a useless operation, so remove one. v2: change cache_is_vipt() to cache_is_vipt_nonaliasing() in order to be self-documented Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-14  ARM: DMA-mapping: add missing GFP_DMA flag for atomic buffer allocation  (Marek Szyprowski)
Atomic pool should always be allocated from DMA zone if such zone is available in the system to avoid issues caused by limited dma mask of any of the devices used for making an atomic allocation. Reported-by: Krzysztof Halasa <khc@pm.waw.pl> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Stable <stable@vger.kernel.org> [v3.6+]
2013-02-25  ARM: DMA-mapping: fix memory leak in IOMMU dma-mapping implementation  (Marek Szyprowski)
This patch removes page_address() usage in the IOMMU-aware dma-mapping implementation and replaces it with direct use of the cpu virtual address provided by the caller. page_address() returned an incorrect address for pages remapped in the atomic pool, which caused a memory leak. Reported-by: Hiroshi Doyu <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
2013-02-25  ARM: dma-mapping: Add maximum alignment order for dma iommu buffers  (Seung-Woo Kim)
The alignment order for a dma iommu buffer is set by the buffer size. For large buffers this wastes iommu address space, so a configurable parameter to limit the maximum alignment order can reduce the waste. Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com> Signed-off-by: Kyungmin.park <kyungmin.park@samsung.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
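The clamp can be sketched in a few lines. Here `get_order_x()` is an illustrative stand-in for the kernel's `get_order()` (order in units of 4 KiB pages, assumed page size): by default a buffer is aligned to the next power of two that covers it, which for big buffers wastes IOVA space, and clamping against a configurable maximum bounds that waste.

```c
#include <assert.h>

/* Smallest order such that 2^order pages cover `size` bytes
 * (illustrative reimplementation, assuming 4 KiB pages). */
static unsigned get_order_x(unsigned long size)
{
    unsigned long pages = (size + 4095) / 4096;
    unsigned order = 0;
    while ((1ul << order) < pages)
        order++;
    return order;
}

/* Clamp the size-derived alignment order against the configured maximum. */
static unsigned align_order(unsigned long size, unsigned max_order)
{
    unsigned order = get_order_x(size);
    return order > max_order ? max_order : order;
}
```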
2013-02-25  ARM: dma-mapping: use highmem for DMA buffers for IOMMU-mapped devices  (Marek Szyprowski)
IOMMU can provide access to any memory page, so there is no point in limiting the allocated pages to lowmem, once other parts of the dma-mapping subsystem correctly support highmem pages. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25  ARM: dma-mapping: add support for CMA regions placed in highmem zone  (Marek Szyprowski)
This patch adds missing pieces to correctly support memory pages served from CMA regions placed in high memory zones. Please note that the default global CMA area is still put into lowmem and is limited by optional architecture specific DMA zone. One can however put device specific CMA regions in high memory zone to reduce lowmem usage. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Michal Nazarewicz <mina86@mina86.com>
2013-02-25  arm: dma mapping: export arm iommu functions  (Prathyush K)
This patch adds EXPORT_SYMBOL_GPL calls to the three arm iommu functions - arm_iommu_create_mapping, arm_iommu_free_mapping and arm_iommu_attach_device. These three functions are arm specific wrapper functions for creating/freeing/using an iommu mapping and they are called by various drivers. If any of these drivers need to be built as dynamic modules, these functions need to be exported. Changelog v2: using EXPORT_SYMBOL_GPL as suggested by Marek. Signed-off-by: Prathyush K <prathyush.k@samsung.com> [m.szyprowski: extended with recently introduced EXPORT_SYMBOL_GPL(arm_iommu_detach_device)] Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25  ARM: dma-mapping: Add arm_iommu_detach_device()  (Hiroshi Doyu)
A counterpart of arm_iommu_attach_device(). Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25  ARM: dma-mapping: Set arm_dma_set_mask() for iommu->set_dma_mask()  (Hiroshi Doyu)
struct dma_map_ops iommu_ops doesn't have ->set_dma_mask, which causes a crash when dma_set_mask() is called from some driver. Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-08  ARM: DMA mapping: fix bad atomic test  (Russell King)
Realview fails to boot with this warning:

BUG: spinlock lockup suspected on CPU#0, init/1
 lock: 0xcf8bde10, .magic: dead4ead, .owner: init/1, .owner_cpu: 0
Backtrace:
[<c00185d8>] (dump_backtrace+0x0/0x10c) from [<c03294e8>] (dump_stack+0x18/0x1c)
 r6:cf8bde10 r5:cf83d1c0 r4:cf8bde10 r3:cf83d1c0
[<c03294d0>] (dump_stack+0x0/0x1c) from [<c018926c>] (spin_dump+0x84/0x98)
[<c01891e8>] (spin_dump+0x0/0x98) from [<c0189460>] (do_raw_spin_lock+0x100/0x198)
[<c0189360>] (do_raw_spin_lock+0x0/0x198) from [<c032cbac>] (_raw_spin_lock+0x3c/0x44)
[<c032cb70>] (_raw_spin_lock+0x0/0x44) from [<c01c9224>] (pl011_console_write+0xe8/0x11c)
[<c01c913c>] (pl011_console_write+0x0/0x11c) from [<c002aea8>] (call_console_drivers.clone.7+0xdc/0x104)
[<c002adcc>] (call_console_drivers.clone.7+0x0/0x104) from [<c002b320>] (console_unlock+0x2e8/0x454)
[<c002b038>] (console_unlock+0x0/0x454) from [<c002b8b4>] (vprintk_emit+0x2d8/0x594)
[<c002b5dc>] (vprintk_emit+0x0/0x594) from [<c0329718>] (printk+0x3c/0x44)
[<c03296dc>] (printk+0x0/0x44) from [<c002929c>] (warn_slowpath_common+0x28/0x6c)
[<c0029274>] (warn_slowpath_common+0x0/0x6c) from [<c0029304>] (warn_slowpath_null+0x24/0x2c)
[<c00292e0>] (warn_slowpath_null+0x0/0x2c) from [<c0070ab0>] (lockdep_trace_alloc+0xd8/0xf0)
[<c00709d8>] (lockdep_trace_alloc+0x0/0xf0) from [<c00c0850>] (kmem_cache_alloc+0x24/0x11c)
[<c00c082c>] (kmem_cache_alloc+0x0/0x11c) from [<c00bb044>] (__get_vm_area_node.clone.24+0x7c/0x16c)
[<c00bafc8>] (__get_vm_area_node.clone.24+0x0/0x16c) from [<c00bb7b8>] (get_vm_area_caller+0x48/0x54)
[<c00bb770>] (get_vm_area_caller+0x0/0x54) from [<c0020064>] (__alloc_remap_buffer.clone.15+0x38/0xb8)
[<c002002c>] (__alloc_remap_buffer.clone.15+0x0/0xb8) from [<c0020244>] (__dma_alloc+0x160/0x2c8)
[<c00200e4>] (__dma_alloc+0x0/0x2c8) from [<c00204d8>] (arm_dma_alloc+0x88/0xa0)
[<c0020450>] (arm_dma_alloc+0x0/0xa0) from [<c00beb00>] (dma_pool_alloc+0xcc/0x1a8)
[<c00bea34>] (dma_pool_alloc+0x0/0x1a8) from [<c01a9d14>] (pl08x_fill_llis_for_desc+0x28/0x568)
[<c01a9cec>] (pl08x_fill_llis_for_desc+0x0/0x568) from [<c01aab8c>] (pl08x_prep_slave_sg+0x258/0x3b0)
[<c01aa934>] (pl08x_prep_slave_sg+0x0/0x3b0) from [<c01c9f74>] (pl011_dma_tx_refill+0x140/0x288)
[<c01c9e34>] (pl011_dma_tx_refill+0x0/0x288) from [<c01ca748>] (pl011_start_tx+0xe4/0x120)
[<c01ca664>] (pl011_start_tx+0x0/0x120) from [<c01c54a4>] (__uart_start+0x48/0x4c)
[<c01c545c>] (__uart_start+0x0/0x4c) from [<c01c632c>] (uart_start+0x2c/0x3c)
[<c01c6300>] (uart_start+0x0/0x3c) from [<c01c795c>] (uart_write+0xcc/0xf4)
[<c01c7890>] (uart_write+0x0/0xf4) from [<c01b0384>] (n_tty_write+0x1c0/0x3e4)
[<c01b01c4>] (n_tty_write+0x0/0x3e4) from [<c01acfe8>] (tty_write+0x144/0x240)
[<c01acea4>] (tty_write+0x0/0x240) from [<c01ad17c>] (redirected_tty_write+0x98/0xac)
[<c01ad0e4>] (redirected_tty_write+0x0/0xac) from [<c00c371c>] (vfs_write+0xbc/0x150)
[<c00c3660>] (vfs_write+0x0/0x150) from [<c00c39c0>] (sys_write+0x4c/0x78)
[<c00c3974>] (sys_write+0x0/0x78) from [<c0014460>] (ret_fast_syscall+0x0/0x3c)

This happens because the DMA allocation code is not respecting atomic allocations correctly. GFP flags should not be tested for GFP_ATOMIC to determine whether an atomic allocation is being requested: GFP_ATOMIC is not a flag but a value. The GFP bitmask flags are all prefixed with __GFP_. The rest of the kernel tests for __GFP_WAIT not being set to indicate an atomic allocation. We need to do the same.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>