path: root/arch/arm/mm/mmu.c
2015-05-14  ARM: 8356/1: mm: handle non-pmd-aligned end of RAM  (Mark Rutland)
At boot time we round the memblock limit down to section size in an attempt to ensure that we will have mapped this RAM with section mappings prior to allocating from it. When mapping RAM we iterate over PMD-sized chunks, creating these section mappings. Section mappings are only created when the end of a chunk is aligned to section size. Unfortunately, with classic page tables (where PMD_SIZE is 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in size the first 1M will not be mapped despite having been accounted for in the memblock limit. This has been observed to result in page tables being allocated from unmapped memory, causing boot-time hangs. This patch modifies the memblock limit rounding to always round down to PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we will round the memblock limit down to a 2M boundary, matching the limits on section mappings and preventing allocations from unmapped memory. For LPAE there should be no change, as PMD_SIZE == SECTION_SIZE.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Acked-by: Laura Abbott <labbott@redhat.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
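For illustration, a minimal sketch of the rounding described above, assuming memblock_limit holds the provisional limit computed while walking the memblocks (simplified from the shape of the code, not the exact diff):
  /*
   * Round the memblock limit down to PMD_SIZE rather than SECTION_SIZE,
   * so that with classic page tables (PMD_SIZE == 2 * SECTION_SIZE) no
   * partially-mapped PMD falls below the allocation limit.
   */
  if (memblock_limit)
          memblock_limit = round_down(memblock_limit, PMD_SIZE);
  if (!memblock_limit)
          memblock_limit = arm_lowmem_limit;

  memblock_set_current_limit(memblock_limit);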
2015-01-07  ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region  (Grygorii Strashko)
The local variables kernel_x_start and kernel_x_end are currently defined as 'unsigned long', which is wrong because they represent a physical memory range and will be calculated incorrectly if LPAE is enabled. As a result, all of the following code in map_lowmem() misbehaves. For example, Keystone 2 boot is broken because
  kernel_x_start == 0x0000 0000
  kernel_x_end   == 0x0080 0000
instead of
  kernel_x_start == 0x0000 0008 0000 0000
  kernel_x_end   == 0x0000 0008 0080 0000
and as a result the whole of low memory is mapped with MT_MEMORY_RW permissions by this code (since start > kernel_x_end):
  } else if (start >= kernel_x_end) {
          map.pfn = __phys_to_pfn(start);
          map.virtual = __phys_to_virt(start);
          map.length = end - start;
          map.type = MT_MEMORY_RW;
          create_mapping(&map);
  }
Hence, fix it by using the phys_addr_t type for the kernel_x_start and kernel_x_end variables.
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
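A before/after sketch of the fix; the expressions follow map_lowmem(), but treat this as illustrative rather than the exact diff:
  /* Before: 'unsigned long' truncates >32-bit physical addresses on LPAE */
  unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
  unsigned long kernel_x_end   = round_up(__pa(__init_end), SECTION_SIZE);

  /* After: phys_addr_t is 64-bit with LPAE, so Keystone 2's
   * 0x0000_0008_xxxx_xxxx range survives the calculation intact */
  phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
  phys_addr_t kernel_x_end   = round_up(__pa(__init_end), SECTION_SIZE);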
2014-12-05  Merge branch 'devel-stable' into for-next  (Russell King)
2014-12-03  ARM: 8238/1: mm: Refine set_memory_* functions  (Jungseung Lee)
The set_memory_* functions share the same implementation except for the memory attribute they change. This patch makes them use a common helper and pulls the functions out into arch/arm/mm/pageattr.c, as arm64 did. This reduces code size and improves readability.
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
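A sketch of the consolidation, mirroring the arm64 pageattr.c shape this refers to (the page_change_data/change_page_range helper pair is assumed; body simplified):
  struct page_change_data {
          pgprot_t set_mask;
          pgprot_t clear_mask;
  };

  /* One common routine, parameterised by the attribute masks... */
  static int change_memory_common(unsigned long addr, int numpages,
                                  pgprot_t set_mask, pgprot_t clear_mask)
  {
          unsigned long size = PAGE_SIZE * numpages;
          struct page_change_data data = { set_mask, clear_mask };

          return apply_to_page_range(&init_mm, addr, size,
                                     change_page_range, &data);
  }

  /* ...and thin wrappers for each attribute */
  int set_memory_ro(unsigned long addr, int numpages)
  {
          return change_memory_common(addr, numpages,
                                      __pgprot(L_PTE_RDONLY), __pgprot(0));
  }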
2014-12-03  ARM: 8235/1: Support for the PXN CPU feature on ARMv7  (Jungseung Lee)
Modern ARMv7-A/R cores optionally implement the following new hardware feature:
- PXN: Privileged execute-never (PXN) is a security feature. The PXN bit determines whether the processor can execute software from the region in privileged mode. It is an effective mitigation against ret2usr attacks. On implementations that do not include LPAE, PXN support is optional.
This patch sets the PXN bit in user page tables to prevent execution of user code in privileged mode.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
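A sketch of the idea, assuming the short-descriptor PMD_PXNTABLE bit and a hypothetical helper name: OR PXN into first-level descriptors that point at user page tables, so privileged mode cannot execute from those mappings.
  static inline pmdval_t user_pmd_table(void)
  {
          pmdval_t prot = _PAGE_USER_TABLE;

          /* PXN is only defined from ARMv7 onwards */
          if (cpu_architecture() >= CPU_ARCH_ARMv7)
                  prot |= PMD_PXNTABLE;   /* privileged execute-never */
          return prot;
  }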
2014-11-21  ARM: convert printk(KERN_* to pr_*  (Russell King)
Convert many (but not all) printk(KERN_*) calls to pr_* to simplify the code. We take the opportunity to join some printk lines together so we don't split the message across several lines, and we also add a few levels to some messages which were previously missing them.
Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
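The shape of the conversion (illustrative: the message text appears in mmu.c, but the line split shown here is made up for the example):
  /* Before: explicit level, string split across source lines */
  printk(KERN_WARNING "Forcing write-allocate cache policy "
         "for SMP\n");

  /* After: one greppable line, level folded into the helper */
  pr_warn("Forcing write-allocate cache policy for SMP\n");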
2014-11-03  Merge tag 'ronx-next' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux into devel-stable  (Russell King)
generic fixmaps
ARM support for CONFIG_DEBUG_RODATA
2014-10-16  ARM: mm: allow non-text sections to be non-executable  (Kees Cook)
Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions into section-sized areas that can have different permissions. Performs the NX permission changes during free_initmem, so that init memory can be reclaimed. This uses section size instead of PMD size to reduce memory lost to padding on non-LPAE systems.
Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.
Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
2014-10-16  arm: fixmap: implement __set_fixmap()  (Kees Cook)
This is used from set_fixmap() and clear_fixmap() via asm-generic/fixmap.h. It also makes sure that the fixmap allocation fits into the expected range.
Based on a patch by Rabin Vincent.
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Rabin Vincent <rabin@rab.in>
Acked-by: Nicolas Pitre <nico@linaro.org>
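Close to the shape of the implementation described here (a reconstruction, not guaranteed verbatim): the pte is looked up through the kernel page tables, and both the compile-time and run-time range checks are visible.
  void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
  {
          unsigned long vaddr = __fix_to_virt(idx);
          pte_t *pte = pte_offset_kernel(pmd_off_k(vaddr), vaddr);

          /* Make sure fixmap region does not exceed available allocation. */
          BUILD_BUG_ON(FIXADDR_START + (__end_of_fixed_addresses * PAGE_SIZE) >
                       FIXADDR_END);
          BUG_ON(idx >= __end_of_fixed_addresses);

          if (pgprot_val(prot))
                  set_pte_at(NULL, vaddr, pte,
                             pfn_pte(phys >> PAGE_SHIFT, prot));
          else
                  pte_clear(NULL, vaddr, pte);
          local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
  }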
2014-10-16  ARM: expand fixmap region to 3MB  (Rob Herring)
With commit a05e54c103b0 ("ARM: 8031/2: change fixmap mapping region to support 32 CPUs"), the fixmap region was expanded to 2MB, but it precluded any other uses of the fixmap region. In order to support other uses, the fixmap region needs to be expanded beyond 2MB. Fortunately, the adjacent 1MB range 0xffe00000-0xfff00000 is available. Remove the fixmap_page_table pointer and look up the page table via the virtual address, so that the fixmap region can span more than one pmd. The 2nd pmd is already created since it is shared with the vector page.
Signed-off-by: Rob Herring <robh@kernel.org>
[kees: fixed CONFIG_DEBUG_HIGHMEM get_fixmap() calls]
[kees: moved pte allocation outside of CONFIG_HIGHMEM]
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
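The resulting region, as a sketch of the fixmap.h constants with values taken from the commit message:
  #define FIXADDR_START   0xffc00000UL
  #define FIXADDR_END     0xfff00000UL   /* was 0xffe00000UL: +1MB, 3MB total */
  #define FIXADDR_TOP     (FIXADDR_END - PAGE_SIZE)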
2014-09-26  ARM: 8152/1: Convert pr_warning to pr_warn  (Joe Perches)
Use the more common pr_warn. Other miscellanea:
o Coalesce formats
o Realign arguments
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-08-02  ARM: add comments to the early page table remap code  (Russell King)
Add further comments to the early page table remap code to explain what the code is doing, why it is doing it, and, more importantly, to explain that the code is not architecturally compliant and is squarely in "UNPREDICTABLE" behaviour territory. Add a warning and tainting of the kernel too.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-06-05  Merge branches 'alignment', 'fixes', 'l2c' (early part) and 'misc' into for-next  (Russell King)
2014-06-02  ARM: ensure C page table setup code follows assembly code (part II)  (Russell King)
This does the same as the previous commit, but for the S bit, which also needs to match the initial value the assembly code used, for the same reasons. Again, we add a check for SMP to ensure that the page tables are correctly set up for SMP.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-06-02  ARM: ensure C page table setup code follows assembly code  (Russell King)
Fix a long-standing bug where, for ARMv6+, we don't fully ensure that the C code sets the same cache policy as the assembly code. This was introduced partially by commit 11179d8ca28d ([ARM] 4497/1: Only allow safe cache configurations on ARMv6 and later) and also by adding SMP support. This patch sets the default cache policy based on the flags used by the assembly code, and then ensures that when a cache policy command line argument is used, it matches the initial setup on ARMv6.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-06-02  ARM: remove unused adjust_cr() function  (Russell King)
adjust_cr() is not used anymore, so let's get rid of it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-06-02  ARM: move "noalign" command line option to alignment.c  (Russell King)
Keep all bits of alignment handling together.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-06-02  ARM: provide common method to clear bits in CPU control register  (Russell King)
Several places open-code this manipulation; let's consolidate it.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-06-01  ARM: 8025/1: Get rid of meminfo  (Laura Abbott)
memblock is now fully integrated into the kernel and is the preferred method for tracking memory. Rather than reinvent the wheel with meminfo, migrate to using memblock directly instead of meminfo as an intermediate.
Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-05-25  ARM: 8055/1: cacheflush: use -st dsb option for ensuring completion  (Will Deacon)
dsb st can be used to ensure completion of pending cache maintenance operations, so use it for the v7 cache maintenance operations.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-23  ARM: 8031/2: change fixmap mapping region to support 32 CPUs  (Liu Hua)
On 32-bit ARM systems, the fixmap mapping region can support no more than 14 CPUs (total: 896KB; one CPU: 64KB), yet NR_CPUS can be configured up to 32, so there is a mismatch. This patch moves the fixmap mapping region down to 0xffc00000-0xffe00000. That region is 2MB (32 CPUs x 64KB), so the fixmap mapping region can then support up to 32 CPUs.
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04  Merge branches 'amba', 'fixes', 'misc', 'mmci', 'unstable/omap-dma' and 'unstable/sa11x0' into for-next  (Russell King)
2014-02-10  ARM: 7954/1: mm: remove remaining domain support from ARMv6  (Will Deacon)
CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do not have hardware thread registers. The lack of these registers requires the kernel to update the vectors page at each context switch in order to write a new TLS pointer. This write must be done via the userspace mapping, since aliasing caches can lead to expensive flushing when using kmap. Finally, this requires the vectors page to be mapped r/w for kernel and r/o for user, which has implications for things like put_user, which must trigger CoW appropriately when targeting user pages. The upshot of all this is that a v6/v7 kernel makes use of domains to segregate kernel and user memory accesses. This has the nasty side effect of making device mappings executable, which has been observed to cause subtle bugs on recent cores (e.g. a Cortex-A15 performing a speculative instruction fetch from the GIC and acking an interrupt in the process). This patch solves the problem by removing the remaining domain support from ARMv6. A new memory type is added specifically for the vectors page, which allows that page (and only that page) to be mapped as user r/o, kernel r/w. All other user r/o pages are also mapped as kernel r/o. Patch co-developed with Russell King.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-10  ARM: 7950/1: mm: Fix stage-2 device memory attributes  (Christoffer Dall)
The stage-2 memory attributes are distinct from the Hyp memory attributes and the stage-1 memory attributes. We were using the stage-1 memory attributes for stage-2 mappings, causing device mappings to be mapped as normal memory. Add the S2 equivalents of the memory attribute defines, and fix the comments explaining the defines while at it. Add a prot_pte_s2 field to the mem_type struct and fill out the field for device mappings accordingly.
Cc: <stable@vger.kernel.org> [3.9+]
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-12-11  ARM: fix executability of CMA mappings  (Russell King)
The CMA region was being marked executable:
  0xdc04e000-0xdc050000     8K RW x  MEM/CACHED/WBRA
  0xdc060000-0xdc100000   640K RW x  MEM/CACHED/WBRA
  0xdc4f5000-0xdc500000    44K RW x  MEM/CACHED/WBRA
  0xdcce9000-0xe0000000 52316K RW x  MEM/CACHED/WBRA
This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but there are also a few other places in dma-mapping which should be corrected to use the right constant. Fix all these places:
  0xdc04e000-0xdc050000     8K RW NX MEM/CACHED/WBRA
  0xdc060000-0xdc100000   640K RW NX MEM/CACHED/WBRA
  0xdc280000-0xdc300000   512K RW NX MEM/CACHED/WBRA
  0xdc6fc000-0xe0000000 58384K RW NX MEM/CACHED/WBRA
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-12-11  ARM: mm: Define set_memory_* functions for ARM  (Laura Abbott)
Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.). Currently, these functions are missing on ARM. Define them in an appropriate manner for ARM.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
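The API surface added here follows the declarations other architectures already expose; the ARM prototypes should look the same:
  int set_memory_ro(unsigned long addr, int numpages);
  int set_memory_rw(unsigned long addr, int numpages);
  int set_memory_x(unsigned long addr, int numpages);
  int set_memory_nx(unsigned long addr, int numpages);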
2013-12-11  ARM: implement basic NX support for kernel lowmem mappings  (Russell King)
Add basic NX support for kernel lowmem mappings. We mark any section which does not overlap kernel text as non-executable, preventing it from being used to write code and then execute directly from there. This does not change the alignment of the sections, so the kernel image doesn't grow significantly, and we can therefore do this without needing a config option.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
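A sketch of the policy in map_lowmem(), assuming section-aligned kernel_x_start/kernel_x_end bounds around the kernel text (simplified; the real code also splits mappings that straddle the text):
  if (end <= kernel_x_start || start >= kernel_x_end) {
          /* No overlap with kernel text: map non-executable */
          map.type = MT_MEMORY_RW;
  } else {
          /* Covers kernel text: must remain executable */
          map.type = MT_MEMORY_RWX;
  }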
2013-12-11  ARM: add permission annotations to MT_MEMORY* mapping types  (Russell King)
Document the permissions which the various MT_MEMORY* mapping types will provide.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-11-14  ARM: 7884/1: mm: Fix ECC mem policy printk  (Michal Simek)
The ECC policy can be applied to the whole system when this bit is implemented by the SoC vendor (IMP - bit 9 - in the L1 page table entry format). When this bit is not implemented by the SoC vendor, it doesn't mean that the system has no other way to do ECC. This patch ensures that the message is shown only when ECC is requested via the ecc=on command line option and the kernel runs on an appropriate ARM core.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-10  ARM: mm: Recreate kernel mappings in early_paging_init()  (Santosh Shilimkar)
This patch adds a step in the init sequence to recreate the kernel code/data page table mappings prior to full paging initialization. This is necessary on LPAE systems that run from a physical address space outside the 4G limit. On these systems, this implementation provides a machine descriptor hook that allows the PHYS_OFFSET to be overridden in a machine-specific fashion.
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: R Sricharan <r.sricharan@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
2013-09-05  Merge branches 'debug-choice', 'devel-stable' and 'misc' into for-linus  (Russell King)
2013-08-01  Merge branch 'security-fixes' into fixes  (Russell King)
2013-08-01  ARM: make vectors page inaccessible from userspace  (Russell King)
If kuser helpers are not provided by the kernel, disable user access to the vectors page. With the kuser helpers gone, there is no reason for this page to be visible to userspace.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-07-31  ARM: move vector stubs  (Russell King)
Move the machine vector stubs into the page above the vector page, which we can prevent from being visible to userspace. Also move the reset stub, and place the swi vector at a location the 'ldr' can reach. This hides pointers into the kernel which could give valuable information to attackers, and reduces the number of exploitable instructions at a fixed address.
Cc: <stable@vger.kernel.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-07-26  ARM: constify machine_desc structure uses  (Russell King)
struct machine_desc records are defined everywhere as 'const' structures, but unfortunately that const-ness is lost through the use of linker magic - the symbols which surround the section are not declared const, so it becomes possible not to use 'const' for pointers to these const structures. Let's fix this oversight - all pointers to these structures should be marked const too.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-07-22  ARM: 7785/1: mm: restrict early_alloc to section-aligned memory  (Stephen Warren)
When map_lowmem() runs and processes a memory bank whose start or end is not section-aligned, memory must be allocated to store the 2nd-level page tables. Those allocations are made by calling memblock_alloc(). At this point, the only memory that is free *and* mapped is memory which has already been mapped by map_lowmem() itself. For this reason, we must calculate the first point at which map_lowmem() will need to allocate memory, and set the memblock allocation limit to a lower address, so that memblock_alloc() is guaranteed to return memory that is already mapped. This patch enhances sanity_check_meminfo() to calculate that memory address and pass it to memblock_set_current_limit(), rather than just assuming the limit is arm_lowmem_limit. The algorithm applied is (a sketch of the walk follows the tags below):
* Default memblock_limit to arm_lowmem_limit in the absence of any other limit; arm_lowmem_limit is the highest memory that is mapped by map_lowmem().
* While walking the list of memblocks, if the start of a block is not aligned, 2nd-level page tables will need to be allocated to map the first few pages of the block. Hence, the memblock_limit must be before the start of the block.
* Similarly, if the end of any block is not aligned, 2nd-level page tables will need to be allocated to map the last few pages of the block. Hence, the memblock_limit must point at the end of the block, rounded down to section alignment.
* The memory blocks are assumed to be sorted in address order, so the first unaligned block start or end is used to set the limit.
With this algorithm, the start or end of almost any bank can be non-section-aligned. The only exception is that the start of bank 0 must be section-aligned, since otherwise memory would need to be allocated when mapping the start of bank 0, which occurs before any free memory is mapped.
[swarren, wrote commit description, rewrote calculation of memblock_limit]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
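A simplified sketch of the walk, assuming memblocks sorted by address as stated above (not the exact mainline code):
  phys_addr_t memblock_limit = 0;
  struct memblock_region *reg;

  for_each_memblock(memory, reg) {
          if (!IS_ALIGNED(reg->base, SECTION_SIZE)) {
                  /* Unaligned start: the limit must fall before this block */
                  memblock_limit = reg->base;
                  break;
          } else if (!IS_ALIGNED(reg->base + reg->size, SECTION_SIZE)) {
                  /* Unaligned end: limit is this end, rounded down below */
                  memblock_limit = reg->base + reg->size;
                  break;
          }
  }

  if (!memblock_limit)
          memblock_limit = arm_lowmem_limit;

  memblock_set_current_limit(round_down(memblock_limit, SECTION_SIZE));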
2013-07-13  Merge branch 'fixes' of git://git.linaro.org/people/rmk/linux-arm  (Linus Torvalds)
Pull ARM fixes from Russell King: "A few fixes for ARM, mostly just one-liners with the exception of the missing section specification. We decided not to rely on .previous to fix this but to explicitly state the section we want the code to be in."
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
  ARM: 7778/1: smp_twd: twd_update_frequency need be run on all online CPUs
  ARM: 7782/1: Kconfig: Let ARM_ERRATA_364296 not depend on CONFIG_SMP
  ARM: mm: fix boot on SA1110 Assabet
  ARM: 7781/1: mmu: Add debug_ll_io_init() mappings to early mappings
  ARM: 7780/1: add missing linker section markup to head-common.S
2013-07-09  ARM: 7781/1: mmu: Add debug_ll_io_init() mappings to early mappings  (Stephen Boyd)
Failure to add the mapping created in debug_ll_io_init() can lead to the BUG_ON() triggering in lib/ioremap.c:27 if the static virtual address chosen for the debug_ll mapping overlaps with another mapping that is created later. This happens because the generic ioremap code has no idea there is a mapping there, so it tries to place a mapping in the same location and blows up when it sees that there is a pte already present.
kernel BUG at lib/ioremap.c:27!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.10.0-rc2-00042-g2af0c67-dirty #316
task: ef088000 ti: ef082000 task.ti: ef082000
PC is at ioremap_page_range+0x16c/0x198
LR is at ioremap_page_range+0xf0/0x198
pc : [<c04cb874>]    lr : [<c04cb7f8>]    psr: 20000113
sp : ef083e78  ip : af140000  fp : ef083ebc
r10: ef7fc100  r9 : ef7fc104  r8 : 000af174
r7 : 00000647  r6 : beffffff  r5 : f004c000  r4 : f0040000
r3 : af173417  r2 : 16440653  r1 : af173e07  r0 : ef7fc8fc
Flags: nzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
Control: 10c5787d  Table: 8020406a  DAC: 00000015
Process swapper/0 (pid: 1, stack limit = 0xef082238)
Stack: (0xef083e78 to 0xef084000)
3e60:                                     00040000 ef083eec
3e80: bf134000 f004bfff c0207c00 f004c000 c02fc120 f000c000 c15e7800 00040000
3ea0: ef083eec 00000647 c098ba9c c0953544 ef083edc ef083ec0 c021b82c c04cb714
3ec0: c09cdc50 00000040 ef0f1e00 ef1003c0 ef083f14 ef083ee0 c09535bc c021b7bc
3ee0: c0953544 c04d0c6c c094e2cc c1600be4 c07440c4 c09a6888 00000002 c0a15f00
3f00: ef082000 00000000 ef083f54 ef083f18 c0208728 c0953550 00000002 c1600bfc
3f20: c08e3fac c0839918 ef083f54 c1600b80 c09a6888 c0a15f00 0000008b c094e2cc
3f40: c098ba9c c098bab8 ef083f94 ef083f58 c094ea0c c020865c 00000002 00000002
3f60: c094e2cc 00000000 c025b674 00000000 c06ff860 00000000 00000000 00000000
3f80: 00000000 00000000 ef083fac ef083f98 c06ff878 c094e910 00000000 00000000
3fa0: 00000000 ef083fb0 c020efe8 c06ff86c 00000000 00000000 00000000 00000000
3fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
3fe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 c0595108
[<c04cb874>] (ioremap_page_range+0x16c/0x198) from [<c021b82c>] (__alloc_remap_buffer.isra.18+0x7c/0xc4)
[<c021b82c>] (__alloc_remap_buffer.isra.18+0x7c/0xc4) from [<c09535bc>] (atomic_pool_init+0x78/0x128)
[<c09535bc>] (atomic_pool_init+0x78/0x128) from [<c0208728>] (do_one_initcall+0xd8/0x198)
[<c0208728>] (do_one_initcall+0xd8/0x198) from [<c094ea0c>] (kernel_init_freeable+0x108/0x1d0)
[<c094ea0c>] (kernel_init_freeable+0x108/0x1d0) from [<c06ff878>] (kernel_init+0x18/0xf4)
[<c06ff878>] (kernel_init+0x18/0xf4) from [<c020efe8>] (ret_from_fork+0x14/0x20)
Code: e50b0040 ebf54b2f e51b0040 eaffffee (e7f001f2)
Fix it by telling generic layers about the static mapping via iotable_init(). This also has the nice side effect of letting you see the mapping in procfs' vmallocinfo file.
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
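Roughly the shape of the fix: the mapping is registered through iotable_init(), so the vmalloc layer reserves the region and later ioremap() calls steer clear of it.
  void __init debug_ll_io_init(void)
  {
          struct map_desc map;

          debug_ll_addr(&map.pfn, &map.virtual);
          if (!map.pfn || !map.virtual)
                  return;
          map.pfn = __phys_to_pfn(map.pfn);
          map.virtual &= PAGE_MASK;
          map.length = PAGE_SIZE;
          map.type = MT_DEVICE;
          iotable_init(&map, 1);    /* instead of a bare create_mapping() */
  }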
2013-07-03  Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm  (Linus Torvalds)
Pull ARM updates from Russell King:
"This contains the usual updates from other people (listed below) and the usual random muddle of miscellaneous ARM updates which cover some low priority bug fixes and performance improvements. I've started to put the pull request wording into the merge commits, which are:
- NoMMU stuff: This includes the following series sent earlier to the list: nommu-fixes, R7 Support, MPU support. I've left out the ARCH_MULTIPLATFORM/!MMU stuff that Arnd and I were discussing today until we've reached a conclusion/that's had some more review. This is rebased (and re-tested) on your devel-stable branch because otherwise there were going to be conflicts with Uwe's V7M work now that you've merged that. I've included the fix for limiting MPU to CPU_V7.
- Huge page support: These changes bring both HugeTLB support and Transparent HugePage (THP) support to ARM. Only long descriptors (LPAE) are supported in this series. The code has been tested on an Arndale board (Exynos 5250).
- LPAE updates: Please pull these miscellaneous LPAE fixes I've been collecting for a while now for 3.11. They've been tested and reviewed by quite a few people, and most of the patches are pretty trivial. -- Will Deacon.
- arch_timer cleanups: Please pull these arch_timer cleanups I've been holding onto for a while. They're the same as my last posting, but have been rebased to v3.10-rc3.
- mpidr linearisation (multiprocessor id register - identifies which CPU number we are in the system): This patch series implements MPIDR linearization through a simple hashing algorithm and updates the current cpu_{suspend}/{resume} code to use the newly created hash structures to retrieve context pointers. It represents a stepping stone for the implementation of power management code on forthcoming multi-cluster ARM systems. It has been tested on TC2 (dual cluster A15xA7 system), iMX6q, OMAP4 and Tegra, with processors hitting low-power states requiring warm-boot resume through the cpu_resume code path."
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (77 commits)
  ARM: 7775/1: mm: Remove do_sect_fault from LPAE code
  ARM: 7777/1: Avoid extra calls to the C compiler
  ARM: 7774/1: Fix dtb dependency to use order-only prerequisites
  ARM: 7770/1: remove residual ARMv2 support from decompressor
  ARM: 7769/1: Cortex-A15: fix erratum 798181 implementation
  ARM: 7768/1: prevent risks of out-of-bound access in ASID allocator
  ARM: 7767/1: let the ASID allocator handle suspended animation
  ARM: 7766/1: versatile: don't mark pen as __INIT
  ARM: 7765/1: perf: Record the user-mode PC in the call chain.
  ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork
  ARM: kernel: implement stack pointer save array through MPIDR hashing
  ARM: kernel: build MPIDR hash function data structure
  ARM: mpu: Ensure that MPU depends on CPU_V7
  ARM: mpu: protect the vectors page with an MPU region
  ARM: mpu: Allow enabling of the MPU via kconfig
  ARM: 7758/1: introduce config HAS_BANDGAP
  ARM: 7757/1: mm: don't flush icache in switch_mm with hardware broadcasting
  ARM: 7751/1: zImage: don't overwrite ourself with a page table
  ARM: 7749/1: spinlock: retry trylock operation if strex fails on free lock
  ARM: 7748/1: oabi: handle faults when loading swi instruction from userspace
  ...
2013-07-02  Merge tag 'cleanup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc  (Linus Torvalds)
Pull ARM SoC cleanups from Arnd Bergmann:
"This contains cleanups as preparation for other branches adding new features; we pulled 16 branches for 9 platforms into this one. Most notable here is the removal of support for ATAGS-based OMAP4 systems. Since all OMAP4 machines are fully functional with DT-based booting in 3.10, we can remove a lot of code here. Also noteworthy is Maxime Ripard's cleanup of the machine descriptors, which means we need no machine descriptors in a lot more cases and can boot additional machines by just having the respective device drivers enabled."
* tag 'cleanup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (76 commits)
  ARM: picoxcell: remove .nr_irqs reference
  ARM: s5p64x0: avoid build warning for uncompress.h
  ARM: SAMSUNG: Remove unused plat/regs-watchdog.h header
  ARM: SAMSUNG: Remove legacy watchdog reset code
  ARM: SAMSUNG: Let platforms use the new watchdog reset driver
  ARM: SAMSUNG: Add watchdog reset driver
  ARM: SAMSUNG: Use local definitions of watchdog registers
  watchdog: s3c2410_wdt: Use local register definitions
  ARM: S5P64X0: Use common uncompress.h part for plat-samsung
  ARM: SAMSUNG: Consolidate uncompress subroutine
  ARM: at91: drop rm9200dk board support
  ARM: dts: msm: Fix merge resolution
  ARM: OMAP1: Remove dma.h
  ARM: OMAP1: Remove legacy irda.h and irda setup from board files
  ARM: OMAP1: Remove duplicated DMA channel definitions
  ARM: OMAP1: Remove McBSP DMA channel definitions
  ARM: OMAP2+: Remove dma.h
  ARM: OMAP2+: hwmod: Remove remaining DMA channel definitions
  ARM: OMAP2+: Remove duplicated DMA channel definitions
  ARM: OMAP2+: Remove AES crypto device DMA channel definitions
  ...
2013-06-29  Merge branch 'devel-stable' into for-next  (Russell King)
Conflicts:
  arch/arm/Makefile
  arch/arm/include/asm/glue-proc.h
2013-06-17  ARM: 7753/1: map_init_section flushes incorrect pmd  (Po-Yu Chuang)
This bug was introduced in commit e651eab0. Some v4/v5 platforms failed to boot due to this.
Signed-off-by: Po-Yu Chuang <ratbert.chuang@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-05-30  ARM: mm: clean up membank size limit checks  (Cyril Chemparathy)
This patch cleans up the highmem sanity check code by simplifying the range checks with a pre-calculated size_limit. It should otherwise have no functional impact on behavior. It also removes a redundant (bank->start < vmalloc_limit) check, since this is already covered by the !highmem condition.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30  ARM: mm: cleanup checks for membank overlap with vmalloc area  (Cyril Chemparathy)
On Keystone platforms, physical memory is entirely outside the 32-bit addressable range. Therefore, the (bank->start > ULONG_MAX) check below marks the entire system memory as highmem, and this causes unpleasantness all over. This patch eliminates the extra bank start check (against ULONG_MAX) by checking bank->start against the physical address corresponding to vmalloc_min instead. In the process, this patch also cleans up parts of the highmem sanity check code by removing what has now become a redundant check for banks that entirely overlap with the vmalloc range.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-30  ARM: mm: use physical addresses in highmem sanity checks  (Cyril Chemparathy)
This patch modifies the highmem sanity checking code to use physical addresses instead. This change eliminates the wrap-around problems associated with the original virtual-address-based checks, and it simplifies the code a bit. The one constraint imposed here is that low physical memory must be mapped in a monotonically increasing fashion if there are multiple banks of memory, i.e., x < y must imply pa(x) < pa(y).
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
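A sketch of the change (simplified): vmalloc_min is a virtual pointer, and the -1/+1 dance avoids wrap when it sits at the very top of the address space; from then on, only phys_addr_t comparisons are needed.
  phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;

  for (i = 0; i < meminfo.nr_banks; i++) {
          struct membank *bank = &meminfo.bank[i];

          /* Physical comparison: no virtual wrap-around possible */
          bank->highmem = bank->start >= vmalloc_limit;
  }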
2013-05-30  ARM: LPAE: use phys_addr_t in alloc_init_pud()  (Vitaly Andrianov)
This patch fixes the alloc_init_pud() function to use phys_addr_t instead of unsigned long when passing in the phys argument. This is an extension to commit 97092e0c56830457af0639f6bd904537a150ea4a (ARM: pgtable: use phys_addr_t for physical addresses), which applied similar changes elsewhere in the ARM memory management code.
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-05-24  ARM: mmu: Call debug_ll_io_init if no map_io function is specified  (Maxime Ripard)
More and more sub-architectures use only the debug_ll_io_init function as their map_io function. Make the core code call this function if no map_io is specified in the machine description, to remove some boilerplate code.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
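The core-code change amounts to a fallback in devicemaps_init(), roughly:
  if (mdesc->map_io)
          mdesc->map_io();
  else
          debug_ll_io_init();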
2013-05-02  Merge branches 'devel-stable', 'entry', 'fixes', 'mach-types', 'misc' and 'smp-hotplug' into for-linus  (Russell King)
2013-04-17  ARM: 7694/1: ARM, TCM: initialize TCM in paging_init(), instead of setup_arch()  (Joonsoo Kim)
tcm_init() calls iotable_init(), which uses early_alloc variants that allocate directly from memblock. Allocating directly from memblock after bootmem has been initialized should not be permitted, because bootmem cannot know which regions have been additionally reserved. So move tcm_init() to a safe place, before bootmem is initialized.
(Tested on the U300)
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
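A sketch of the resulting ordering in paging_init(), with surrounding steps elided (the ordering follows the description above; treat the call sites as illustrative):
  void __init paging_init(struct machine_desc *mdesc)
  {
          /* ... page tables built; early_alloc() still backed by memblock ... */
          devicemaps_init(mdesc);
          kmap_init();
          tcm_init();      /* moved here from setup_arch(): memblock is
                              still the active allocator at this point */
          /* ... */
          bootmem_init();  /* after this, direct memblock allocation is
                              no longer safe */
  }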
2013-03-22  ARM: 7677/1: LPAE: Fix mapping in alloc_init_section for unaligned addresses  (Sricharan R)
With LPAE enabled, alloc_init_section() does not map the entire address space for unaligned addresses. The issue also reproduces with CMA + LPAE: CMA tries to map 16MB with page-granularity mappings during boot, alloc_init_pte() is called, and out of the 16MB only 2MB gets mapped while the rest remains inaccessible. Because of this, OMAP5 boot is broken with CMA + LPAE enabled. Fix the issue by ensuring that the entire address range is mapped.
Signed-off-by: R Sricharan <r.sricharan@ti.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <chris@cloudcar.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Christoffer Dall <chris@cloudcar.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
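The fix, roughly as it appears in alloc_init_pmd() (a reconstruction, not guaranteed verbatim): iterate in pmd_addr_end() steps so that with LPAE every pmd in the range gets mapped, using a section mapping when the alignment allows it and falling back to pte mappings otherwise.
  static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
                                    unsigned long end, phys_addr_t phys,
                                    const struct mem_type *type)
  {
          pmd_t *pmd = pmd_offset(pud, addr);
          unsigned long next;

          do {
                  /* With LPAE, loop to cover all pmds for the given range */
                  next = pmd_addr_end(addr, end);

                  /* Section mapping only if addr, next and phys all align */
                  if (type->prot_sect &&
                      ((addr | next | phys) & ~SECTION_MASK) == 0)
                          map_init_section(pmd, addr, next, phys, type);
                  else
                          alloc_init_pte(pmd, addr, next,
                                         __phys_to_pfn(phys), type);

                  phys += next - addr;
          } while (pmd++, addr = next, addr != end);
  }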