path: root/arch/arm/mm
Age  Commit message  Author
2013-06-10  ARM: dma-mapping: Add new API dma_ops->map_pages()  (Hiroshi Doyu)
Add new API dma_ops->map_pages() for performance bug 1286500 Change-Id: Ib8bbcad53024225173be765358af03d0961f8af0 (cherry picked from commit 1e3b6ee46a5defaa8e1fcc97fc5d9b619c481c41) Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/234137 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-06-10  ARM: dma-mapping: Round-up IOVA map base  (Hiroshi Doyu)
This is necessary for iova_alloc_at(). On high-order allocations, the lower bits of the base were ignored and an incorrect IOVA address was returned. bug 1286500 Change-Id: I0be96b97c8036f8a5bc1c35a1c85e04593021a2b (cherry picked from commit 578a5333d43b2c9a78f0a234d391c2f8f5382b5d) Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/234136 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-06-10  ARM: dma-mapping: Add arm_iommu_detach_device()  (Hiroshi Doyu)
Need the counterpart of arm_iommu_attach_device(). bug 1286500 Change-Id: I7663075ba56e0cf7a0762927247bfb5b884cd750 (cherry picked from commit 96425941ba18e0aa68e22cdd476bd3e521aa8256) Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/234133 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2013-05-06  arm: mm: cpa: Fix redundant L2 flushes on t11x  (Krishna Reddy)
Bug 1198897 Change-Id: I2099a4ee8660fc8333cb9a6c54b9329a7e66d4e8 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/172090 (cherry picked from commit 55059981bc97372c34bed0901feada4ad9054f65) Reviewed-on: http://git-master/r/224595 Reviewed-by: Abhinav Sinha <absinha@nvidia.com> GVS: Gerrit_Virtual_Submit
2013-04-30Revert "Revert "arm: errata: Workaround for Cortex-A15 erratum 798181 ↵Bo Yan
(TLBI/DSB operations)"" This reverts commit c7cc6aa56f184389203f380df1e39e94e2e2d6f5. Change-Id: Ia481c2f0d9f49e7f05eb2b3fe9a65d7cb4302326 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/222643 GVS: Gerrit_Virtual_Submit Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2013-04-30Revert "Revert "Revert "ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ↵Bo Yan
ASID-capable CPUs""" This reverts commit 450e0659ea22b0122c8f48acd2f058a46486b826. Change-Id: I88a1148a0293d7c95b2af18ae4f42e97e7572f02 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/222642 GVS: Gerrit_Virtual_Submit Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2013-04-10  arm: mm: Fix merge issue with 3.4.35  (Prashant Gaikwad)
Change-Id: I43f2a3e267307e532eeb109714f53386213193b4 Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com> Reviewed-on: http://git-master/r/216913 (cherry picked from commit 87a811d83af81872fa0e98a5184cb4729a4abd78) Reviewed-on: http://git-master/r/218138 Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com> Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
2013-04-08Revert "Revert "ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable ↵Bo Yan
CPUs"" This reverts commit 5d04ad58c35de6289072aad40cdc90abf8534faf. Change-Id: I73c136e3b0c5e7eb329fe2264b95a003f77cae52 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/217077 Reviewed-by: Gary Zhang <garyz@nvidia.com> Tested-by: Joshua Widen <jwiden@nvidia.com>
2013-04-08Revert "arm: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB ↵Bo Yan
operations)" This reverts commit e11ccb30b44fc55ba0576f5082e5e17e9a1d1854. Change-Id: Ic96a1b8629778470de0fea1df9bac950ab98bf1f Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/217076 Reviewed-by: Gary Zhang <garyz@nvidia.com> Tested-by: Joshua Widen <jwiden@nvidia.com>
2013-04-03  common: DMA-mapping: add DMA_ATTR_SKIP_FREE_IOVA attribute  (Hiroshi Doyu)
This patch adds the DMA_ATTR_SKIP_FREE_IOVA attribute to the DMA-mapping subsystem. This is the counterpart of map_page_at(), which just maps a pre-allocated iova to a page. With this attribute, unmap_page() unmaps the link between the iova and a page, leaving the iova allocated. bug 1235233 bug 1263718 Change-Id: Id5535b73e0ca212a045dd0b0ff57de8432e7cf13 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/204468 (cherry picked from commit 77374aee027c51c4e887eaaa3e6b8540f9f6ea87) Reviewed-on: http://git-master/r/215465 Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com> Tested-by: Sandeep Shinde <sashinde@nvidia.com>
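For illustration only, with the 3.4-era struct dma_attrs interface a caller could request this behaviour roughly as sketched below; the wrapper function, device and handle names are assumptions, not taken from the patch.

    #include <linux/dma-mapping.h>
    #include <linux/dma-attrs.h>

    /* Hypothetical sketch: tear down the IOVA->page link but keep the IOVA
     * region reserved so it can be re-mapped later with map_page_at(). */
    static void example_unmap_keep_iova(struct device *dev, dma_addr_t iova,
                                        size_t size)
    {
            struct dma_map_ops *ops = get_dma_ops(dev);
            DEFINE_DMA_ATTRS(attrs);

            dma_set_attr(DMA_ATTR_SKIP_FREE_IOVA, &attrs);
            ops->unmap_page(dev, iova, size, DMA_TO_DEVICE, &attrs);
    }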
2013-04-02  arm: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations)  (Catalin Marinas)
On Cortex-A15 (r0p0..r3p2) the TLBI/DSB are not adequately shooting down all use of the old entries. This patch implements the erratum workaround which consists of: 1. Dummy TLBIMVAIS and DSB on the CPU doing the TLBI operation. 2. Send IPI to the CPUs that are running the same mm (and ASID) as the one being invalidated (or all the online CPUs for global pages). 3. CPU receiving the IPI executes a DMB and CLREX (part of the exception return code already). The switch_mm() code includes a DMB operation since the IPI is only sent to CPUs running the same ASID. Change-Id: Ideb7f479910f7d4bf25182c84eb5e71691c42a93 Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/209830 Reviewed-by: Mrutyunjay Sawant <msawant@nvidia.com> Tested-by: Mrutyunjay Sawant <msawant@nvidia.com>
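For illustration, step 1 (the dummy inner-shareable TLBI plus DSB on the CPU doing the invalidation) looks roughly like the sketch below; the IPI broadcast of steps 2-3 is omitted, and the helper name and exact encoding follow the later upstream workaround, so treat them as assumptions.

    /* Sketch: dummy TLBIMVAIS (TLB invalidate by MVA, inner shareable) using
     * address 0, followed by a DSB, issued on the CPU doing the real TLBI. */
    static inline void dummy_flush_tlb_a15_erratum(void)
    {
            asm("mcr p15, 0, %0, c8, c3, 1" : : "r" (0));
            dsb();  /* dsb() from the ARM barrier definitions */
    }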
2013-04-02Revert "ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs"Bo Yan
This reverts commit 7fec1b57b8a925d83c194f995f83d9f8442fd48e. Change-Id: I3e2a4ed4e3dcb52368ec42e10819316078323ea4 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/209829 GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2013-04-02Revert "ARM: Remove current_mm per-cpu variable"Bo Yan
This reverts commit e323969ccda2d69f02e047c08b03faa09215c72a. Change-Id: I0f44f33b4848ec8e326bd356545903ca14d0da9a Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/209827 GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2013-04-02Revert "ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks"Russell King
This reverts commit 45b95235b0ac86cef2ad4480b0618b8778847479. Will Deacon reports that: In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID") I updated the ASID rollover code to use only the kernel page tables whilst updating the ASID. Unfortunately, the code to restore the user page tables was part of a later patch which isn't yet in mainline, so this leaves the code quite broken. We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW from ARM, so lets revert these until we can properly sort out what we're doing with the context switching. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> (cherry picked from commit a0a54d37b4b1d1f55d1e81e8ffc223bb85472fa3) Change-Id: Id3bd7c795bb84269b646e6a1344d1974d85bf094 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/209825 GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2013-03-26  ARM errata: Writing ACTLR.SMP when the L2 cache has been idle for an extended period may not work correctly  (Bo Yan)
This workaround is for ARM erratum 799270, which is applicable to Cortex-A15 up to revision r2p4. The workaround is to read from a device register and create a data dependency between this read and the modification of ACTLR. Change-Id: I26813f17a8a9c6a90446ddeb943ef318e3c69770 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/212770 Reviewed-by: Bobby Meeker <bmeeker@nvidia.com>
2013-03-06  Merge branch 'linux-3.4.35' into rel-17  (Sachin Nikam)
Bug 1243631 Change-Id: I915826047b2e20f0ad0a7d75df295c6cbf6e5b0a
2013-02-06  ARM: mm: Skip I-cache invalidate for Cortex-A15 boot  (Bo Yan)
This is not required since the cache is invalidated by HW in the reset sequence. The bootloader is supposed to do the same before it hands over control to the kernel. Change-Id: I0991de3ba1015a32f2c49a0333fd0b17a51a4f31 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/197028 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com> Reviewed-by: Antti Miettinen <amiettinen@nvidia.com> Tested-by: Antti Miettinen <amiettinen@nvidia.com>
2013-02-04  ARM: mm: Remove unnecessary CMO in Cortex A15 startup  (Bo Yan)
Cortex-A15 flushes the L2 cache after reset, so there is no need to do this in software if L2 has already been invalidated in the bootloader and the cache is disabled. For secondary startup, there is likewise no reason to flush L2. This change assumes the setup code is always entered as the result of a CPU reset. Change-Id: I6d58f8b4a638b70acfb35b97c87a09266aceef41 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/170563 (cherry picked from commit 26e7a8ea22abe09852fa1ce36b6cec8dc8fc5978) Reviewed-on: http://git-master/r/196044 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Aleksandr Frid <afrid@nvidia.com>
2013-02-03  ARM: DMA: Fix struct page iterator in dma_cache_maint() to work with sparsemem  (Russell King)
commit 15653371c67c3fbe359ae37b720639dd4c7b42c5 upstream. Subhash Jadavani reported this partial backtrace: Now consider this call stack from MMC block driver (this is on the ARMv7 based board): [<c001b50c>] (v7_dma_inv_range+0x30/0x48) from [<c0017b8c>] (dma_cache_maint_page+0x1c4/0x24c) [<c0017b8c>] (dma_cache_maint_page+0x1c4/0x24c) from [<c0017c28>] (___dma_page_cpu_to_dev+0x14/0x1c) [<c0017c28>] (___dma_page_cpu_to_dev+0x14/0x1c) from [<c0017ff8>] (dma_map_sg+0x3c/0x114) This is caused by incrementing the struct page pointer, and running off the end of the sparsemem page array. Fix this by incrementing by pfn instead, and convert the pfn to a struct page. Suggested-by: James Bottomley <JBottomley@Parallels.com> Tested-by: Subhash Jadavani <subhashj@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
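A simplified sketch of the iteration pattern the fix describes (lowmem only, no highmem or offset handling; not the verbatim upstream hunk):

    #include <linux/mm.h>

    /* Walk the buffer by pfn and look up the struct page each iteration,
     * instead of incrementing a struct page pointer that can run off the
     * end of a sparsemem section's page array. */
    static void cache_maint_by_pfn(struct page *page, size_t size,
                                   void (*op)(const void *, size_t))
    {
            unsigned long pfn = page_to_pfn(page);
            size_t left = size;

            while (left) {
                    struct page *p = pfn_to_page(pfn);  /* valid across sections */
                    size_t len = min_t(size_t, left, PAGE_SIZE);

                    op(page_address(p), len);
                    pfn++;
                    left -= len;
            }
    }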
2013-01-16  ARM: tegra11x: Disable L2 prefetch throttle  (Bo Yan)
Disable A15's L2 prefetch throttle mechanism to improve performance. bug 1212902 Change-Id: I03a27518b26da3cbf1e7aad96ff2d67187a1ebf6 Reviewed-on: http://git-master/r/188788 (cherry picked from commit 246ebe8a6a2761a453341478599d66549fc9b734) Signed-off-by: Bo Yan <byan@nvidia.com> Change-Id: If9fbd7043600c092c3aa23020e5500276fca6ab6 Reviewed-on: http://git-master/r/191049 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit
2013-01-11  ARM: mm: use pteval_t to represent page protection values  (Will Deacon)
commit 864aa04cd02979c2c755cb28b5f4fe56039171c0 upstream. When updating the page protection map after calculating the user_pgprot value, the base protection map is temporarily stored in an unsigned long type, causing truncation of the protection bits when LPAE is enabled. This effectively means that calls to mprotect() will corrupt the upper page attributes, clearing the XN bit unconditionally. This patch uses pteval_t to store the intermediate protection values, preserving the upper bits for 64-bit descriptors. Acked-by: Nicolas Pitre <nico@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
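The gist of the fix, sketched with assumed local variable names (the real change is in build_mem_type_table()): keep the intermediate value in pteval_t, which is 64-bit under LPAE, rather than in a 32-bit unsigned long that truncates the upper attribute bits such as XN.

    /* before (truncates 64-bit LPAE descriptors):
     *         unsigned long v = pgprot_val(protection_map[i]);
     * after (pteval_t is 64-bit when LPAE is enabled): */
    pteval_t v = pgprot_val(protection_map[i]);

    protection_map[i] = __pgprot(v | user_pgprot);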
2013-01-10  ARM: tegra11x: start L2 clock before enabling SMP  (Bo Yan)
Do an external device read to start L2 clock, then change SMP bit in ACTLR. The ACTLR change needs to be done immediately after the device read is done since there are only 256 clock cycles maximum available before the L2 clock can be gated again. bug 1208654 bug 1195192 Change-Id: Ide1c0476d629cbea07f585013ed3b7e79a67c86e Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/189712 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Tested-by: Sang-Hun Lee <sanlee@nvidia.com>
2012-12-05  ARM: mm: restore counter enable register  (Bo Yan)
Change-Id: I2433e53175e79d558d76a7c37b10de9175d7b1b0 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/167385 Reviewed-by: Simone Willett <swillett@nvidia.com> Tested-by: Simone Willett <swillett@nvidia.com>
2012-12-04  ARM: mm: Enable NCSE feature for A15 only  (Bo Yan)
Change-Id: If966ee69f1d5e4314f79685238ecff3c44eadac0 Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/167879 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
2012-12-04  arm: mm: cpa: Fix warnings in cpa code.  (Krishna Reddy)
Change-Id: I4b9faea7252399c598619460dee27990ad6474ca Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/163722 Reviewed-by: Alex Waterman <alexw@nvidia.com> Reviewed-by: Automatic_Commit_Validation_User
2012-11-30  ARM: mm: enable non-cacheable streaming enhancement  (Bo Yan)
This is cortex-a15 specific bug 1178938 Change-Id: Id695d89dbe1411d277f2c1296c74586ca9c1584e Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/164168 Reviewed-by: Mrutyunjay Sawant <msawant@nvidia.com> Tested-by: Mrutyunjay Sawant <msawant@nvidia.com>
2012-11-28  ARM: mm: enable ARM_ERRATA_752520 WAR only for r2p0 to r2p8  (Vishal Singh)
The workaround for ARM erratum 752520 was applied for all revisions up to r2p8, but this erratum is present only on r2p0 to r2p8, while T20 has an r1p1 revision. Make this change to enable the WAR only for revisions from r2p0 to r2p8. Bug 853428 Bug 1045637 Reviewed-on: http://git-master/r/43962 (cherry picked from commit 57a0028d94c7ad71acab0c9ee29f5472e46c55bf) Reviewed-on: http://git-master/r/44540 (cherry picked from commit d7f06b0a1b247f2a1444b3b78bc7dc8b21a5b7dd) Reviewed-on: http://git-master/r/161949 (cherry picked from commit f84777eadee307e605f3accdfbf7114917e5a51c) Change-Id: Id3ab36cb757d45ab9bddfa5b08c0643a00765bb2 Signed-off-by: Vishal Annapurve <vannapurve@nvidia.com> Signed-off-by: Vishal Singh <vissingh@nvidia.com> Reviewed-on: http://git-master/r/165948 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Sumeet Gupta <sumeetg@nvidia.com> Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-by: Sachin Nikam <snikam@nvidia.com>
2012-11-12  ARM: mm: save/restore some PMU registers  (Bo Yan)
Specifically, this change saves and restores registers controlling user space access of ARM performance monitoring unit registers and for PMU interrupt enables. Change-Id: Iac88df17112e2ef2ccf53674c3fa3a74d2d4221f Signed-off-by: Bo Yan <byan@nvidia.com> Reviewed-on: http://git-master/r/162149 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com> GVS: Gerrit_Virtual_Submit
2012-11-07  arm: mm: cpa: Configurable cache_maint_{inner,outer}_threshold  (Hiroshi Doyu)
Introduce configurable cache_maint_{inner,outer}_threshold via debugfs. Bug 1158336 Change-Id: I7bb94adadbc41ff65dbd9992920c938df2449b06 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Reviewed-on: http://git-master/r/161209 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
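A plausible shape for the debugfs hooks described above; the directory, file and variable names plus the defaults are assumptions inferred from the subject line, not copied from the patch.

    #include <linux/debugfs.h>
    #include <linux/init.h>
    #include <linux/stat.h>

    static u32 cache_maint_inner_threshold = 1024 * 1024;  /* assumed default */
    static u32 cache_maint_outer_threshold = 1024 * 1024;  /* assumed default */

    static int __init cache_maint_debugfs_init(void)
    {
            struct dentry *root = debugfs_create_dir("cache-maint", NULL);

            if (!root)
                    return -ENOMEM;
            debugfs_create_u32("inner_threshold", S_IRUGO | S_IWUSR, root,
                               &cache_maint_inner_threshold);
            debugfs_create_u32("outer_threshold", S_IRUGO | S_IWUSR, root,
                               &cache_maint_outer_threshold);
            return 0;
    }
    late_initcall(cache_maint_debugfs_init);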
2012-10-31Revert "HACK: arm: mm: Disable Freeing init memory."Deepak Nibade
This hack was introduced in Main to enable booting of dalmore/pluto. Revert this hack because it is no longer needed. Bug 1166538 This reverts commit 1fba4c801566639db74d117f1233bb790d1748aa. Change-Id: I950f056d254e32f8e49967c015bf3ecea44c19db Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/159867 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2012-10-29  ARM: mm: rename jump labels in v7_flush_dcache_all function  (Lorenzo Pieralisi)
This patch renames jump labels in v7_flush_dcache_all in order to define a specific flush cache levels entry point. Change-Id: If84ff442617cec67419dbc75fe1c6daa153ce537 Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Shawn Guo <shawn.guo@linaro.org> Reviewed-on: http://git-master/r/147782 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-by: Bo Yan <byan@nvidia.com> Tested-by: Bo Yan <byan@nvidia.com> Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2012-10-29  ARM: mm: implement LoUIS API for cache maintenance ops  (Lorenzo Pieralisi)
ARM v7 architecture introduced the concept of cache levels and related control registers. New processors like A7 and A15 embed an L2 unified cache controller that becomes part of the cache level hierarchy. Some operations in the kernel like cpu_suspend and __cpu_disable do not require a flush of the entire cache hierarchy to DRAM but just the cache levels belonging to the Level of Unification Inner Shareable (LoUIS), which in most of ARM v7 systems correspond to L1. The current cache flushing API used in cpu_suspend and __cpu_disable, flush_cache_all(), ends up flushing the whole cache hierarchy since for v7 it cleans and invalidates all cache levels up to Level of Coherency (LoC) which cripples system performance when used in hot paths like hotplug and cpuidle. Therefore a new kernel cache maintenance API must be added to cope with latest ARM system requirements. This patch adds flush_cache_louis() to the ARM kernel cache maintenance API. This function cleans and invalidates all data cache levels up to the Level of Unification Inner Shareable (LoUIS) and invalidates the instruction cache for processors that support it (> v7). This patch also creates an alias of the cache LoUIS function to flush_kern_all for all processor versions prior to v7, so that the current cache flushing behaviour is unchanged for those processors. v7 cache maintenance code implements a cache LoUIS function that cleans and invalidates the D-cache up to LoUIS and invalidates the I-cache, according to the new API. Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Shawn Guo <shawn.guo@linaro.org> Change-Id: Id758abd2f67ee46da91e8372cd3a09d6ae3a2608 Reviewed-on: http://git-master/r/147781 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-by: Bo Yan <byan@nvidia.com> Tested-by: Bo Yan <byan@nvidia.com> Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
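Usage is a drop-in replacement in those hot paths, for example (sketch):

    /* cpu_suspend()/__cpu_disable() style teardown: only the levels up to
     * LoUIS (typically just L1 on ARMv7 systems) need cleaning before this
     * CPU leaves the inner-shareable domain. */
    flush_cache_louis();    /* instead of flush_cache_all(), which flushes to LoC */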
2012-10-21  ARM: 7541/1: Add ARM ERRATA 775420 workaround  (Simon Horman)
commit 7253b85cc62d6ff84143d96fe6cd54f73736f4d7 upstream. arm: Add ARM ERRATA 775420 workaround Workaround for the 775420 Cortex-A9 (r2p2, r2p6, r2p8, r2p10, r3p0) erratum. In case a data cache maintenance operation aborts with an MMU exception, it might cause the processor to deadlock. This workaround puts a DSB before executing the ISB if an abort may occur on cache maintenance. Based on work by Kouei Abe and feedback from Catalin Marinas. Signed-off-by: Kouei Abe <kouei.abe.cp@rms.renesas.com> [ horms@verge.net.au: Changed to implementation suggested by catalin.marinas@arm.com ] Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Simon Horman <horms@verge.net.au> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
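Conceptually the workaround is a DSB inserted before the ISB that follows a cache maintenance operation which might abort; a hedged sketch using inline asm (the address variable is assumed and this is not the exact cache-v7.S hunk):

    /* clean and invalidate one D-cache line by MVA; this op may abort */
    asm volatile("mcr p15, 0, %0, c7, c14, 1" : : "r" (addr));
    #ifdef CONFIG_ARM_ERRATA_775420
            dsb();          /* erratum 775420: drain before the ISB below */
    #endif
    isb();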
2012-10-10  arm: mm: cpa: Use config option to enable/disable cache flush by set/ways  (Krishna Reddy)
Reviewed-on: http://git-master/r/133635 (cherry picked from commit 644a8fcb8be8f6a6a2f882d854974fce40e2c744) Change-Id: I00d12c1a40a56d300396538080eefc68c6ccca9e Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/143108 Reviewed-by: Rohan Somvanshi <rsomvanshi@nvidia.com> Tested-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
2012-10-10  HACK: arm: mm: Disable Freeing init memory.  (Krishna Reddy)
Reviewed-on: http://git-master/r/133170 (cherry picked from commit 44f72b181421900ae61eb165477ba104a359f9e8) Change-Id: I366109e0991cf10cc4ff9fc8bc4140c67cfda5bb Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Signed-off-by: Deepak Nibade <dnibade@nvidia.com> Reviewed-on: http://git-master/r/143061 Reviewed-by: Simone Willett <swillett@nvidia.com> Tested-by: Simone Willett <swillett@nvidia.com>
2012-10-02  ARM: cache-l2x0: get size of outer cache  (Kirill Artamonov)
Implement interface for getting size of outer cache. bug 983964 Change-Id: If855f32d3eaa4c673c132b1964a46fe1c15b4dfe Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com> Reviewed-on: http://git-master/r/140024 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Peter De Schrijver <pdeschrijver@nvidia.com> Reviewed-by: Sachin Nikam <snikam@nvidia.com> Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
2012-10-02  ARM: Fix ioremap() of address zero  (Russell King)
commit a849088aa1552b1a28eea3daff599ee22a734ae3 upstream. Murali Nalajala reports a regression that ioremapping address zero results in an oops dump: Unable to handle kernel paging request at virtual address fa200000 pgd = d4f80000 [fa200000] *pgd=00000000 Internal error: Oops: 5 [#1] PREEMPT SMP ARM Modules linked in: CPU: 0 Tainted: G W (3.4.0-g3b5f728-00009-g638207a #13) PC is at msm_pm_config_rst_vector_before_pc+0x8/0x30 LR is at msm_pm_boot_config_before_pc+0x18/0x20 pc : [<c0078f84>] lr : [<c007903c>] psr: a0000093 sp : c0837ef0 ip : cfe00000 fp : 0000000d r10: da7efc17 r9 : 225c4278 r8 : 00000006 r7 : 0003c000 r6 : c085c824 r5 : 00000001 r4 : fa101000 r3 : fa200000 r2 : c095080c r1 : 002250fc r0 : 00000000 Flags: NzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment kernel Control: 10c5387d Table: 25180059 DAC: 00000015 [<c0078f84>] (msm_pm_config_rst_vector_before_pc+0x8/0x30) from [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20) [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20) from [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04) [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04) from [<c007b17c>] (arch_idle+0x294/0x3e0) [<c007b17c>] (arch_idle+0x294/0x3e0) from [<c000eed8>] (default_idle+0x18/0x2c) [<c000eed8>] (default_idle+0x18/0x2c) from [<c000f254>] (cpu_idle+0x90/0xe4) [<c000f254>] (cpu_idle+0x90/0xe4) from [<c057231c>] (rest_init+0x88/0xa0) [<c057231c>] (rest_init+0x88/0xa0) from [<c07ff890>] (start_kernel+0x3a8/0x40c) Code: c0704256 e12fff1e e59f2020 e5923000 (e5930000) This is caused by the 'reserved' entries which we insert (see 19b52abe3c5d7 - ARM: 7438/1: fill possible PMD empty section gaps) which get matched for physical address zero. Resolve this by marking these reserved entries with a different flag. Tested-by: Murali Nalajala <mnalajal@codeaurora.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-09-14  ARM: 7489/1: errata: fix workaround for erratum #720789 on UP systems  (Will Deacon)
commit 730a8128cd8978467eb1cf546b11014acb57d433 upstream. Commit 5a783cbc4836 ("ARM: 7478/1: errata: extend workaround for erratum #720789") added workarounds for erratum #720789 to the range TLB invalidation functions with the observation that the erratum only affects SMP platforms. However, when running an SMP_ON_UP kernel on a uniprocessor platform we must take care to preserve the ASID as the workaround is not required. This patch ensures that we don't set the ASID to 0 when flushing the TLB on such a system, preserving the original behaviour with the workaround disabled. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-09-14  ARM: 7487/1: mm: avoid setting nG bit for user mappings that aren't present  (Will Deacon)
commit 47f1204329237a0f8655f5a9f14a38ac81946ca1 upstream. Swap entries are encoded in ptes such that !pte_present(pte) and pte_file(pte). The remaining bits of the descriptor are used to identify the swapfile and offset within it to the swap entry. When writing such a pte for a user virtual address, set_pte_at unconditionally sets the nG bit, which (in the case of LPAE) will corrupt the swapfile offset and lead to a BUG: [ 140.494067] swap_free: Unused swap offset entry 000763b4 [ 140.509989] BUG: Bad page map in process rs:main Q:Reg pte:0ec76800 pmd:8f92e003 This patch fixes the problem by only setting the nG bit for user mappings that are actually present. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
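A sketch of the set_pte_at() logic the fix describes, i.e. only tagging the entry as non-global when it is a present user mapping (simplified; the real patch uses an equivalent present-user check):

    static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                  pte_t *ptep, pte_t pteval)
    {
            unsigned long ext = 0;

            /* Only present user mappings get the nG bit; swap/file entries
             * must keep every descriptor bit for their encoded offset. */
            if (addr < TASK_SIZE && pte_present(pteval)) {
                    __sync_icache_dcache(pteval);
                    ext |= PTE_EXT_NG;
            }

            set_pte_ext(ptep, pteval, ext);
    }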
2012-09-07  ARM: dma-mapping: Avoid to overwrite a map  (Hiroshi Doyu)
Avoid overwriting an existing mapping, and warn when it happens. Change-Id: Ieef971bf8dc7e9719445b7b253daa55f4c109ae2 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
2012-09-07  ARM: dma-mapping: New dma_map_ops->map_page*_at* function  (Hiroshi Doyu)
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
2012-09-07  ARM: dma-mapping: New dma_map_ops->iova_alloc*_at* function  (Hiroshi Doyu)
To allocate an IOVA area at a specified address. Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
2012-09-07  ARM: dma-mapping: New dma_map_ops->iova_{alloc,free}() functions  (Hiroshi Doyu)
There are some cases where IOVA allocation and mapping have to be done separately, especially for performance optimization reasons. This patch allows client modules to {alloc,free} IOVA space without backing that area with actual pages. Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
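The downstream prototypes are not shown in this log, but based on the description a plausible (hypothetical) shape for the new dma_map_ops hooks is:

    /* Hypothetical signatures inferred from the commit message: allocate or
     * free IOVA space without mapping any pages behind it. */
    dma_addr_t (*iova_alloc)(struct device *dev, size_t size);
    void       (*iova_free)(struct device *dev, dma_addr_t addr, size_t size);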
2012-09-07  ARM: dma-mapping: New dma_map_ops->iova_get_free_{total,max} functions  (Hiroshi Doyu)
->iova_get_free_total() returns the sum of available free areas. ->iova_get_free_max() returns the largest available free area size. Change-Id: I817ff46bea799a809e2e6b163b2e86cb737cf077 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
2012-09-07  ARM: dma-mapping: Skip cache maint for invalid page  (Hiroshi Doyu)
Skip cache maintenance for an invalid page, but warn when it happens. Change-Id: I780587d132fd9440767046877c0724939897889f Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
2012-09-07  ARM: dma-mapping: Refrain noisy console message  (Hiroshi Doyu)
With many IOMMU'able devices, the console gets noisy. Tegra30 has a few dozen IOMMU'able devices. Change-Id: I031fd6b5e88d04d3d2ab117677ca9ab2535ba475 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
2012-09-07  ARM: dma-mapping: Remove unused var at arm_coherent_iommu_unmap_page  (Hiroshi Doyu)
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
2012-09-07  ARM: dma-mapping: Small logical clean up  (Hiroshi Doyu)
Skip unnecessary operations if order == 0. A little bit easier to read. Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
2012-09-07  ARM: dma-mapping: IOMMU allocates pages from atomic_pool with GFP_ATOMIC  (Hiroshi Doyu)
Make use of the same atomic pool as DMA does, and skip the kernel page mapping, which can involve sleepable operations when allocating a kernel page table. Change-Id: I72e62f521d2d357621aa5a7fd6f513ec941a49e2 Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
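A sketch of the allocation-path check this describes; the helper name is an assumption (the following entry introduces the related __atomic_get_pages() helper):

    static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
                                       dma_addr_t *handle, gfp_t gfp,
                                       struct dma_attrs *attrs)
    {
            /* Atomic callers cannot sleep, so carve the buffer out of the
             * pre-mapped atomic pool instead of building kernel page tables. */
            if (!(gfp & __GFP_WAIT))
                    return __iommu_alloc_atomic(dev, size, handle);

            /* ... normal sleeping path: allocate pages, map them in the IOMMU
             * and create the kernel mapping ... */
            return NULL;  /* sketch only */
    }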
2012-09-07  ARM: dma-mapping: Introduce __atomic_get_pages() for __iommu_get_pages()  (Hiroshi Doyu)
Support atomic allocation in __iommu_get_pages(). Change-Id: I16f94a0ad9a54ffc53bac4e53c090336c4b5560d Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>