Commit log for include/linux/swiotlb.h
2014-06-04  x86: enable DMA CMA with swiotlb  (Akinobu Mita)

The DMA Contiguous Memory Allocator support on x86 is disabled when the swiotlb config option is enabled, so DMA CMA is always disabled on x86_64 because swiotlb is always enabled there. This patch adds support for DMA CMA even when the swiotlb config option is enabled.

The contiguous memory allocator on x86 is integrated into dma_generic_alloc_coherent(), which is the .alloc callback in nommu_dma_ops for dma_alloc_coherent(). x86_swiotlb_alloc_coherent(), the .alloc callback in swiotlb_dma_ops, first tries to allocate with dma_generic_alloc_coherent() and then calls swiotlb_alloc_coherent() as a fallback.

The main part of supporting DMA CMA with swiotlb is changing x86_swiotlb_free_coherent(), the .free callback in swiotlb_dma_ops for dma_free_coherent(), so that it can distinguish memory allocated by dma_generic_alloc_coherent() from memory allocated by swiotlb_alloc_coherent() and release the former with dma_generic_free_coherent(), which can handle contiguous memory. This change requires making is_swiotlb_buffer() a global function.

This also requires changing the .free callback in the dma_map_ops for amd_gart and sta2x11, because those dma_ops also use dma_generic_alloc_coherent().

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Don Dutile <ddutile@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
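A minimal sketch of the resulting free path, assuming the era's x86 helpers and the struct dma_attrs signature; this is illustrative rather than the exact hunk from the patch:

    void x86_swiotlb_free_coherent(struct device *dev, size_t size,
                                   void *vaddr, dma_addr_t dma_addr,
                                   struct dma_attrs *attrs)
    {
            /* Bounce-buffer memory goes back to swiotlb... */
            if (is_swiotlb_buffer(dma_to_phys(dev, dma_addr)))
                    swiotlb_free_coherent(dev, size, vaddr, dma_addr);
            else
                    /* ...CMA/page-allocator memory goes back the generic way. */
                    dma_generic_free_coherent(dev, size, vaddr, dma_addr, attrs);
    }

The dispatch lives outside lib/swiotlb.c, which is why is_swiotlb_buffer() has to become a global function.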
2013-04-17  x86, kdump: Set crashkernel_low automatically  (Yinghai Lu)

Chao reported that kdump does not work well on his system on 3.8 without an extra parameter (on that system iommu does not work with kdump), and he now has to append crashkernel_low=Y to the first kernel's command line to make kdump work.

We have now modified crashkernel=X to allocate memory beyond 4G (if available) and to not allocate a low range for the crash kernel unless the user specifies it with crashkernel_low=Y. This causes a regression if iommu is not enabled: without iommu, swiotlb needs to be set up in the first 4G, and there is no low memory available to the second kernel.

Set crashkernel_low automatically if the user does not specify it. For systems that do support IOMMU with kdump properly, the user can specify crashkernel_low=0 to save that 72M of low RAM.

-v3: add swiotlb_size() according to Konrad.
-v4: add a comment about what the 8M is for, according to hpa; also update more crashkernel_low= references in kernel-parameters.txt.
-v5: update changelog according to Vivek.
-v6: change the description of the swiotlb reference according to HATAYAMA.

Reported-by: WANG Chao <chaowang@redhat.com>
Tested-by: WANG Chao <chaowang@redhat.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1366089828-19692-2-git-send-email-yinghai@kernel.org
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
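A sketch of how the automatic low reservation can be sized. The changelog calls the new helper swiotlb_size(); the swiotlb_size_or_default() spelling used below is an assumption about the name that ends up in swiotlb.h, and the extra 8M is the slack for other low allocations mentioned in -v4:

    #include <linux/swiotlb.h>

    /* Hypothetical helper: how much low memory the second kernel needs. */
    static unsigned long long crash_low_size(void)
    {
            /* swiotlb bounce buffer plus ~8M for misc low allocations */
            return swiotlb_size_or_default() + (8ULL << 20);
    }

Passing crashkernel_low=0 skips the reservation entirely, keeping the ~72M of low RAM available to the first kernel.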
2013-01-29  x86: Don't panic if can not alloc buffer for swiotlb  (Yinghai Lu)

On the normal boot path on a system with iommu support, the swiotlb buffer is allocated early and then iommu initialization is tried; if the Intel or AMD iommu can be set up properly, the swiotlb buffer is freed.

The early allocation is done with bootmem and could panic when we try to use kdump with the buffer above 4G only, or with memmap used to limit memory under 4G, for example memmap=4095M$1M to remove memory under 4G.

According to Eric, add a _nopanic version and a no_iotlb_memory flag so that map_single fails later if swiotlb is still needed.

-v2: don't pass nopanic, and use an -ENOMEM return value, according to Eric; panic early instead of using swiotlb_full to panic, according to Eric/Konrad.
-v3: make swiotlb_init non-panicking; this affects arm64, ia64, powerpc, tile, unicore32, x86.
-v4: clean up swiotlb_init by removing swiotlb_init_with_default_size.

Suggested-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-36-git-send-email-yinghai@kernel.org
Reviewed-and-tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
Cc: linux-mips@linux-mips.org
Cc: xen-devel@lists.xensource.com
Cc: virtualization@lists.linux-foundation.org
Cc: Shuah Khan <shuahkhan@gmail.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
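A rough sketch of the shape the non-panicking init takes; the nopanic bootmem helper name and the exact flow are assumptions based on the changelog, not a copy of the patch:

    void __init swiotlb_init(int verbose)
    {
            unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT;
            unsigned char *vstart;

            /* Try, but do not panic, to get low bootmem for the bounce buffer. */
            vstart = alloc_bootmem_low_pages_nopanic(PAGE_ALIGN(bytes));
            if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
                    return;

            /* Remember the failure; the panic happens later, and only if a
             * bounce mapping is actually requested. */
            no_iotlb_memory = true;
            pr_warn("Cannot allocate SWIOTLB buffer\n");
    }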
2012-10-30  swiotlb: Do not export swiotlb_bounce since there are no external consumers  (Alexander Duyck)

Currently swiotlb is the only consumer of swiotlb_bounce. Since that is the case it doesn't make much sense to keep exporting it, so make it a static function only.

In addition we can save a few more lines of code by making it accept the DMA address as a physical address instead of a virtual one. This is the last piece in essentially pushing all of the DMA address values to use physical addresses in swiotlb.

In order to clarify things, since we now have 2 physical addresses in use inside of swiotlb_bounce, I am renaming phys to orig_addr, and dma_addr to tlb_addr. This way it should be clear that orig_addr is contained within io_orig_addr and tlb_addr is an address within the io_tlb_addr buffer.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2012-10-30  swiotlb: Use physical addresses instead of virtual in swiotlb_tbl_sync_single  (Alexander Duyck)

This change makes it so that the sync functionality also uses physical addresses. This helps to further reduce the use of the virt_to_phys and phys_to_virt functions.

In order to clarify things, since we now have 2 physical addresses in use inside of swiotlb_tbl_sync_single, I am renaming phys to orig_addr, and dma_addr to tlb_addr. This way it should be clear that orig_addr is contained within io_orig_addr and tlb_addr is an address within the io_tlb_addr buffer.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2012-10-30  swiotlb: Use physical addresses for swiotlb_tbl_unmap_single  (Alexander Duyck)

This change makes it so that the unmap functionality also uses physical addresses. This helps to further reduce the use of the virt_to_phys and phys_to_virt functions.

In order to clarify things, since we now have 2 physical addresses in use inside of swiotlb_tbl_unmap_single, I am renaming phys to orig_addr, and dma_addr to tlb_addr. This way it should be clear that orig_addr is contained within io_orig_addr and tlb_addr is an address within the io_tlb_addr buffer.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2012-10-30  swiotlb: Return physical addresses when calling swiotlb_tbl_map_single  (Alexander Duyck)

This change makes it so that swiotlb_tbl_map_single will return a physical address instead of a virtual address when called. The advantage of this, once again, is that we avoid a number of virt_to_phys and phys_to_virt translations by working with everything as a physical address.

One change I had to make in order to support using physical addresses is that I could no longer trust 0 to be an invalid physical address on all platforms. So instead I made it so that ~0 is returned on error. This should never be a valid return value as it implies that only one byte would be available for use.

In order to clarify things, since we now have 2 physical addresses in use inside of swiotlb_tbl_map_single, I am renaming phys to orig_addr, and dma_addr to tlb_addr. This way it should be clear that orig_addr is contained within io_orig_addr and tlb_addr is an address within the io_tlb_addr buffer.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
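A hedged usage sketch of the phys_addr_t-based interface this series converges on. The SWIOTLB_MAP_ERROR spelling for the ~0 sentinel and the example wrapper are assumptions for illustration:

    /* Bounce-map a buffer with the post-change API and hand back a bus address. */
    static dma_addr_t example_bounce_map(struct device *hwdev,
                                         dma_addr_t tbl_dma_addr,
                                         phys_addr_t phys, size_t size,
                                         enum dma_data_direction dir)
    {
            phys_addr_t map;

            map = swiotlb_tbl_map_single(hwdev, tbl_dma_addr, phys, size, dir);
            if (map == SWIOTLB_MAP_ERROR)   /* ~0 replaces the old NULL/0 check */
                    return 0;               /* caller-defined error value */

            return phys_to_dma(hwdev, map); /* device sees a bus address */
    }

Teardown uses the matching physical address: swiotlb_tbl_unmap_single(hwdev, map, size, dir).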
2012-08-21  swiotlb: add the late swiotlb initialization function with iotlb memory  (Konrad Rzeszutek Wilk)

This enables the caller to initialize swiotlb with its own iotlb memory late in the bootup.

See git commit eb605a5754d050a25a9f00d718fb173f24c486ef "swiotlb: add swiotlb_tbl_map_single library function" which explains the full details of what it can be used for.

CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
[v1: Fold in smatch warning]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
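A sketch of how a caller might use the late variant, assuming an interface of the form swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs) returning 0 on success (an assumption drawn from this description, not a quote of the header):

    /* Hypothetical late setup with caller-provided low pages. */
    static int example_late_swiotlb_setup(void)
    {
            unsigned long nslabs = (64UL << 20) >> IO_TLB_SHIFT; /* ~64MB worth */
            unsigned long bytes  = nslabs << IO_TLB_SHIFT;
            char *tlb;

            tlb = (char *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
                                           get_order(bytes));
            if (!tlb)
                    return -ENOMEM;

            return swiotlb_late_init_with_tbl(tlb, nslabs);
    }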
2011-12-06  swiotlb: Expose swiotlb_nr_tlb function to modules  (Konrad Rzeszutek Wilk)

This serves as a mechanism to detect whether SWIOTLB is enabled or not. We also fix the spelling - it was swioltb instead of swiotlb.

CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
[v1: Ripped out swiotlb_enabled]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2011-06-06  swiotlb: Export swioltb_nr_tbl and utilize it as appropiate.  (FUJITA Tomonori)

By default io_tlb_nslabs is set to zero and gets set to whatever value is passed in via the swiotlb_init_with_tbl function. The default value passed in is 64MB. However, if the user provides 'swiotlb=<nslabs>', the default value is ignored and the value provided by the user is used... except when the SWIOTLB is used under Xen - there the default value of 64MB is used and the Xen-SWIOTLB has no mechanism to get the 'io_tlb_nslabs' filled out by the setup_io_tlb_npages function.

This patch provides a function for the Xen-SWIOTLB to call to see if io_tlb_nslabs is set and, if so, use that value.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
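The pattern this enables on the Xen side, sketched with a hypothetical helper name (IO_TLB_SHIFT comes from linux/swiotlb.h; the 64MB fallback mirrors the default described above):

    /* Hypothetical: pick the slab count Xen-SWIOTLB should use. */
    static unsigned long example_pick_nslabs(void)
    {
            unsigned long nslabs = swiotlb_nr_tbl(); /* 0 if swiotlb= was never parsed */

            if (!nslabs)
                    nslabs = (64UL << 20) >> IO_TLB_SHIFT; /* fall back to 64MB */
            return nslabs;
    }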
2010-06-07  swiotlb: Make swiotlb bookkeeping functions visible in the header file.  (Konrad Rzeszutek Wilk)

We put the functions dealing with operations on the SWIOTLB buffer in the header and make those functions non-static. We also export them via EXPORT_SYMBOL_GPL.

See "swiotlb: add swiotlb_tbl_map_single library function" for a full description of the patchset.

[v2: swiotlb_sync_single_range_for_* no more. Remove usage.]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Albert Herranz <albert_herranz@yahoo.es>
2010-06-07  swiotlb: search and replace "int dir" with "enum dma_data_direction dir"  (Konrad Rzeszutek Wilk)

.. to catch anybody doing something funky.

See "swiotlb: add swiotlb_tbl_map_single library function" for a full description of the patchset.

[v2: swiotlb_sync_single_range_* no more - removed usage]
[v3: enum dma_data_direction direction -> enum dma_data_direction dir]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Albert Herranz <albert_herranz@yahoo.es>
2010-06-07  swiotlb: add the swiotlb initialization function with iotlb memory  (FUJITA Tomonori)

This enables the caller to initialize swiotlb with its own iotlb memory.

See "swiotlb: add swiotlb_tbl_map_single library function" for a full description of the patchset.

[v2: changed ..with_tlb to ..with_tbl]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Albert Herranz <albert_herranz@yahoo.es>
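The boot-time counterpart of the late variant above; the verbose argument is an assumption, mirroring the swiotlb_init(int verbose) interface of that era:

    /* Declared in include/linux/swiotlb.h: the caller hands swiotlb a buffer
     * it allocated itself instead of letting swiotlb_init() allocate one. */
    extern void swiotlb_init_with_tbl(char *tlb, unsigned long nslabs,
                                      int verbose);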
2010-05-27  swiotlb: remove unnecessary swiotlb_sync_single_range_*  (FUJITA Tomonori)

swiotlb_sync_single_range_for_cpu and swiotlb_sync_single_range_for_device are unnecessary because swiotlb_sync_single_for_cpu and swiotlb_sync_single_for_device can be used instead.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
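Callers simply fold the offset into the address themselves; a hedged before/after sketch (dir is still an int at this point in the history):

    static void example_sync_partial(struct device *hwdev, dma_addr_t dev_addr,
                                     unsigned long offset, size_t size, int dir)
    {
            /* previously:
             * swiotlb_sync_single_range_for_cpu(hwdev, dev_addr, offset, size, dir);
             */
            swiotlb_sync_single_for_cpu(hwdev, dev_addr + offset, size, dir);
    }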
2009-11-15  swiotlb: Remove duplicate swiotlb_force extern declarations  (FUJITA Tomonori)

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: tony.luck@intel.com
LKML-Reference: <1258199198-16657-4-git-send-email-fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-10  swiotlb: Defer swiotlb init printing, export swiotlb_print_info()  (FUJITA Tomonori)

This enables us to avoid printing swiotlb memory info when we initialize swiotlb. After swiotlb initialization, we may find that we don't need swiotlb.

This patch removes the code that prints swiotlb memory info in swiotlb_init() and exports a function to do that printing later.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: chrisw@sous-sol.org
Cc: dwmw2@infradead.org
Cc: joerg.roedel@amd.com
Cc: muli@il.ibm.com
Cc: tony.luck@intel.com
Cc: benh@kernel.crashing.org
LKML-Reference: <1257849980-22640-9-git-send-email-fujita.tomonori@lab.ntt.co.jp>
[ -v2: merge up conflict ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
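The intended calling pattern, sketched; the verbose flag on swiotlb_init() and the use of x86's swiotlb flag are assumptions for illustration:

    /* Hypothetical arch init path. */
    static void __init example_iommu_init(void)
    {
            swiotlb_init(0);                /* allocate quietly at early boot */

            /* ... try to bring up a hardware IOMMU here ... */

            if (swiotlb)                    /* bounce buffers are still needed */
                    swiotlb_print_info();   /* print only if swiotlb is kept */
    }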
2009-11-10  swiotlb: Add swiotlb_free() function  (FUJITA Tomonori)

The swiotlb_free() function frees all memory allocated for swiotlb.

We need to initialize swiotlb before IOMMU initialization (x86 and powerpc need to allocate memory from the bootmem allocator). If IOMMU initialization is successful, we need to free the swiotlb resources (we don't want to waste 64MB).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: chrisw@sous-sol.org
Cc: dwmw2@infradead.org
Cc: joerg.roedel@amd.com
Cc: muli@il.ibm.com
LKML-Reference: <1257849980-22640-8-git-send-email-fujita.tomonori@lab.ntt.co.jp>
[ -v2: build fix for the !CONFIG_SWIOTLB case ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
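Sketch of the flow this enables; the hook name and the hw_iommu_usable condition are stand-ins for whatever the architecture uses to signal a working hardware IOMMU:

    /* Hypothetical hook, called after IOMMU init has been attempted.
     * swiotlb_init() ran earlier from bootmem; if a hardware IOMMU is now
     * handling DMA, release the bounce buffer. */
    static void __init example_swiotlb_cleanup(bool hw_iommu_usable)
    {
            if (hw_iommu_usable)
                    swiotlb_free();   /* don't waste the 64MB */
    }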
2009-07-28  swiotlb: use phys_to_dma and dma_to_phys  (FUJITA Tomonori)

This converts swiotlb to use phys_to_dma and dma_to_phys instead of swiotlb_phys_to_bus() and swiotlb_bus_to_phys().

swiotlb_phys_to_bus() and swiotlb_bus_to_phys() are not necessary, so this patch also removes them.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
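The replacements are the per-device conversion helpers from the DMA mapping headers; a one-to-one sketch (the example wrappers are only for illustration):

    static dma_addr_t example_phys_to_bus(struct device *hwdev, phys_addr_t paddr)
    {
            return phys_to_dma(hwdev, paddr);     /* replaces swiotlb_phys_to_bus() */
    }

    static phys_addr_t example_bus_to_phys(struct device *hwdev, dma_addr_t dev_addr)
    {
            return dma_to_phys(hwdev, dev_addr);  /* replaces swiotlb_bus_to_phys() */
    }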
2009-07-28  swiotlb: remove swiotlb_arch_range_needs_mapping  (FUJITA Tomonori)

Nobody uses swiotlb_arch_range_needs_mapping().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
2009-07-28  swiotlb: remove unused swiotlb_alloc()  (FUJITA Tomonori)

Nobody uses swiotlb_alloc().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
2009-07-28  swiotlb: remove unused swiotlb_alloc_boot()  (FUJITA Tomonori)

Nobody uses swiotlb_alloc_boot().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
2009-04-08  swiotlb: change swiotlb_bus_to[phys,virt] prototypes  (Becky Bruce)

Add a hwdev argument that is needed on some architectures in order to access a per-device offset that is taken into account when producing a physical address. (It is also needed to get from a bus address to a virtual address, because the physical address is an intermediate step.) Also make swiotlb_bus_to_virt weak so architectures can override it.

Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Cc: jeremy@goop.org
Cc: ian.campbell@citrix.com
LKML-Reference: <1239199761-22886-8-git-send-email-galak@kernel.crashing.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-11  swiotlb: range_needs_mapping should take a physical address.  (Ian Campbell)

The swiotlb_arch_range_needs_mapping() hook should take a physical address rather than a virtual address in order to support highmem pages.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-01-06  x86, ia64: remove duplicated swiotlb code  (FUJITA Tomonori)

This adds swiotlb_map_page and swiotlb_unmap_page to lib/swiotlb.c and removes IA64's and X86's swiotlb_map_page and swiotlb_unmap_page.

This also removes the unnecessary swiotlb_map_single, swiotlb_map_single_attrs, swiotlb_unmap_single and swiotlb_unmap_single_attrs.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
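The shared prototypes that replace the per-arch copies, roughly as they looked in include/linux/swiotlb.h of that era (struct dma_attrs was the attrs type at the time; treat the exact argument names as illustrative):

    extern dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
                                       unsigned long offset, size_t size,
                                       enum dma_data_direction dir,
                                       struct dma_attrs *attrs);

    extern void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
                                   size_t size, enum dma_data_direction dir,
                                   struct dma_attrs *attrs);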
2009-01-06  x86, ia64: convert to use generic dma_map_ops struct  (FUJITA Tomonori)

This converts X86 and IA64 to use include/linux/dma-mapping.h. It's a bit large but pretty boring.

The major change for X86 is converting 'int dir' to 'enum dma_data_direction dir' in DMA mapping operations. The major change for IA64 is using map_page and unmap_page instead of map_single and unmap_single.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
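With the generic struct, an architecture can wire the swiotlb library straight into its DMA ops. A partial sketch using the 2009-era field names (.alloc_coherent/.free_coherent rather than the later .alloc/.free); the ops-variable name is hypothetical:

    static struct dma_map_ops example_swiotlb_dma_ops = {
            .alloc_coherent         = swiotlb_alloc_coherent,
            .free_coherent          = swiotlb_free_coherent,
            .map_page               = swiotlb_map_page,
            .unmap_page             = swiotlb_unmap_page,
            .map_sg                 = swiotlb_map_sg_attrs,
            .unmap_sg               = swiotlb_unmap_sg_attrs,
            .sync_single_for_cpu    = swiotlb_sync_single_for_cpu,
            .sync_single_for_device = swiotlb_sync_single_for_device,
            .dma_supported          = swiotlb_dma_supported,
            .mapping_error          = swiotlb_dma_mapping_error,
    };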
2008-12-28  swiotlb: add hwdev to swiotlb_phys_to_bus() / swiotlb_sg_to_bus()  (Jeremy Fitzhardinge)

Impact: extend functions with a (yet unused) parameter, update callsites

Some architectures need it - in preparation for highmem swiotlb.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-17  swiotlb: add arch hook to force mapping  (Ian Campbell)

Impact: generalize the sw-IOTLB range checks

Some architectures require special rules to determine whether a range needs mapping or not. This adds a weak function for architectures to override.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-17  swiotlb: allow architectures to override phys<->bus<->phys conversions  (Ian Campbell)

Impact: generalize phys<->bus<->phys conversions in the swiotlb code

Architectures may need to override these conversions. Implement a __weak hook point containing the default implementation.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
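The identity defaults this implies would look roughly as follows (a sketch; the exact prototypes at this point in the series are an assumption):

    dma_addr_t __weak swiotlb_phys_to_bus(phys_addr_t paddr)
    {
            return paddr;   /* 1:1 on machines with no bus offset */
    }

    phys_addr_t __weak swiotlb_bus_to_phys(dma_addr_t baddr)
    {
            return baddr;
    }

An architecture with a non-trivial bus mapping can then provide strong definitions that apply its own translation.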
2008-12-16  swiotlb: move some definitions to header  (Ian Campbell)

Impact: cleanup

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-12-16  swiotlb: allow architectures to override swiotlb pool allocation  (Jeremy Fitzhardinge)

Impact: generalize swiotlb allocation code

Architectures may need to allocate memory specially for use with the swiotlb. Create the weak functions swiotlb_alloc_boot() and swiotlb_alloc(), defaulting to the current behaviour.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
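A sketch of what such weak defaults would preserve; the parameter lists are assumptions, while the allocators shown are the ones swiotlb used at the time:

    void * __weak __init swiotlb_alloc_boot(size_t size, unsigned long nslabs)
    {
            /* boot-time pool: low bootmem, as before */
            return alloc_bootmem_low_pages(size);
    }

    void * __weak swiotlb_alloc(unsigned int order, unsigned long nslabs)
    {
            /* late pool: low pages from the buddy allocator, as before */
            return (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN, order);
    }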
2008-10-16  introduce generic header file for the software IO/TLB  (Joerg Roedel)

A series of patches introduces a generic header file for the software IO/TLB implementation in lib/swiotlb.c. Currently each architecture using this code defines the prototypes itself. The prototypes are moved to include/linux/swiotlb.h and this file is included in architecture-specific code for X86 and IA64.

This patch: create the include/linux/swiotlb.h file, which contains all function prototypes for the lib/swiotlb.c file.

(akpm: the dependent patches will be trickled through arch trees)

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
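An abridged sketch of the kind of header this creates, showing a handful of the lib/swiotlb.c entry points as they looked around that time; the selection and exact argument names are illustrative:

    #ifndef __LINUX_SWIOTLB_H
    #define __LINUX_SWIOTLB_H

    #include <linux/types.h>

    struct device;
    struct scatterlist;

    extern void swiotlb_init(void);

    extern void *swiotlb_alloc_coherent(struct device *hwdev, size_t size,
                                        dma_addr_t *dma_handle, gfp_t flags);
    extern void swiotlb_free_coherent(struct device *hwdev, size_t size,
                                      void *vaddr, dma_addr_t dma_handle);

    /* ... map/unmap/sync prototypes for single buffers and scatterlists ... */

    #endif /* __LINUX_SWIOTLB_H */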