path: root/arch/arm64/kernel/cpu_errata.c
2020-04-24  arm64: cpu_errata: include required headers  (Arnd Bergmann)

commit 94a5d8790e79ab78f499d2d9f1ff2cab63849d9f upstream.

Without including psci.h and arm-smccc.h, we now get a build failure in some configurations:

  arch/arm64/kernel/cpu_errata.c: In function 'arm64_update_smccc_conduit':
  arch/arm64/kernel/cpu_errata.c:278:10: error: 'psci_ops' undeclared (first use in this function); did you mean 'sysfs_ops'?
  arch/arm64/kernel/cpu_errata.c: In function 'arm64_set_ssbd_mitigation':
  arch/arm64/kernel/cpu_errata.c:311:3: error: implicit declaration of function 'arm_smccc_1_1_hvc' [-Werror=implicit-function-declaration]
    arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
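The fix itself is small: pull in the two headers the message names, which declare psci_ops and the SMCCC 1.1 helpers, near the top of cpu_errata.c (the commit may add other includes as well):

    #include <linux/arm-smccc.h>
    #include <linux/psci.h>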
2018-09-15  arm64: Handle mismatched cache type  (Suzuki K Poulose)

commit 314d53d297980676011e6fd83dac60db4a01dc70 upstream.

Track mismatches in the cache type register (CTR_EL0) other than the D/I min line sizes, and trap user accesses if there are any.

Fixes: be68a8aaf925 ("arm64: cpufeature: Fix CTR_EL0 field definitions")
Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-09-15  arm64: Fix mismatched cache line size detection  (Suzuki K Poulose)

commit 4c4a39dd5fe2d13e2d2fa5fceb8ef95d19fc389a upstream.

If there is a mismatch in the I/D min line size, we must always use the system wide safe value both in applications and in the kernel, while performing cache operations. However, we have been checking more bits than just the min line sizes, which triggers false negatives. We may need to trap the user accesses in such cases, but not necessarily patch the kernel. This patch fixes the check to do the right thing as advertised. A new capability will be added to check mismatches in other fields and ensure we trap the CTR accesses.

Fixes: be68a8aaf925 ("arm64: cpufeature: Fix CTR_EL0 field definitions")
Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
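For illustration, a minimal sketch of the corrected check: compare only the IminLine (bits [3:0]) and DminLine (bits [19:16]) fields of CTR_EL0 against the sanitised system-wide value, not the whole register. The mask constant and read_system_reg() are stand-ins here; the real helper names vary by kernel version:

    #define CTR_MINLINE_MASK        ((0xfUL << 16) | 0xfUL)

    static bool has_mismatched_cache_line_size(void)
    {
            u64 cur  = read_cpuid(CTR_EL0);          /* this CPU's view */
            u64 safe = read_system_reg(SYS_CTR_EL0); /* system-wide safe value */

            /* mismatch only if the min line size fields differ */
            return (cur & CTR_MINLINE_MASK) != (safe & CTR_MINLINE_MASK);
    }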
2018-07-22  arm64: ssbd: Restore mitigation status on CPU resume  (Marc Zyngier)

commit 647d0519b53f440a55df163de21c52a8205431cc upstream.

On a system where firmware can dynamically change the state of the mitigation, the CPU will always come up with the mitigation enabled, including when coming back from suspend. If the user has requested "no mitigation" via a command line option, let's enforce it by calling into the firmware again to disable it. Similarly, for a resume from hibernate, the mitigation could have been disabled by the boot kernel. Let's ensure that it is set back on in that case.

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
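One plausible shape for re-applying the user's choice on the resume path, sketched as a CPU PM notifier. The actual patch hooks the suspend/hibernate exit paths directly; arm64_set_ssbd_mitigation() follows the naming visible in the build error quoted above, and the ssbd_state check is an assumption:

    #include <linux/cpu_pm.h>
    #include <linux/notifier.h>

    /* sketch: force firmware state back to the user's choice on resume */
    static int ssbd_cpu_pm_notify(struct notifier_block *nb,
                                  unsigned long action, void *v)
    {
            if (action == CPU_PM_EXIT)
                    arm64_set_ssbd_mitigation(ssbd_state != ARM64_SSBD_FORCE_DISABLE);
            return NOTIFY_OK;
    }

    static struct notifier_block ssbd_cpu_pm_nb = {
            .notifier_call = ssbd_cpu_pm_notify,
    };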
2018-07-22  arm64: ssbd: Skip apply_ssbd if not using dynamic mitigation  (Marc Zyngier)

commit 986372c4367f46b34a3c0f6918d7fb95cbdf39d6 upstream.

In order to avoid checking arm64_ssbd_callback_required on each kernel entry/exit even if no mitigation is required, let's add yet another alternative that by default jumps over the mitigation, and that gets nop'ed out if we're doing dynamic mitigation. Think of it as a poor man's static key...

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-07-22  arm64: Add 'ssbd' command-line option  (Marc Zyngier)

commit a43ae4dfe56a01f5b98ba0cb2f784b6a43bafcc6 upstream.

On a system where the firmware implements ARCH_WORKAROUND_2, it may be useful to either permanently enable or disable the workaround for cases where the user decides that they'd rather not get a trap overhead, and keep the mitigation permanently on or off instead of switching it on exception entry/exit. In any case, default to the mitigation being enabled.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
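A sketch of the option parsing, close to the shape such an early_param handler typically takes; the enum and identifier names are assumptions matching the commit's terminology:

    #include <linux/init.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    /* ssbd=force-on | force-off | kernel (dynamic, the default) */
    enum { ARM64_SSBD_FORCE_DISABLE, ARM64_SSBD_KERNEL, ARM64_SSBD_FORCE_ENABLE };
    static int ssbd_state = ARM64_SSBD_KERNEL;

    static int __init ssbd_cfg(char *buf)
    {
            if (!buf || !buf[0])
                    return -EINVAL;
            if (!strcmp(buf, "force-on"))
                    ssbd_state = ARM64_SSBD_FORCE_ENABLE;
            else if (!strcmp(buf, "force-off"))
                    ssbd_state = ARM64_SSBD_FORCE_DISABLE;
            else if (!strcmp(buf, "kernel"))
                    ssbd_state = ARM64_SSBD_KERNEL;
            else
                    return -EINVAL;
            return 0;
    }
    early_param("ssbd", ssbd_cfg);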
2018-07-22  arm64: Add ARCH_WORKAROUND_2 probing  (Marc Zyngier)

commit a725e3dda1813ed306734823ac4c65ca04e38500 upstream.

As for Spectre variant-2, we rely on SMCCC 1.1 to provide the discovery mechanism for detecting the SSBD mitigation. A new capability and a config option are also allocated for that purpose.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
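A sketch of the SMCCC 1.1 discovery step, assuming the standard ARCH_FEATURES query; this fragment would live inside the detection callback:

    struct arm_smccc_res res;

    /* ask the firmware, via the PSCI conduit, whether
     * ARCH_WORKAROUND_2 is implemented */
    switch (psci_ops.conduit) {
    case PSCI_CONDUIT_HVC:
            arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                              ARM_SMCCC_ARCH_WORKAROUND_2, &res);
            break;
    case PSCI_CONDUIT_SMC:
            arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                              ARM_SMCCC_ARCH_WORKAROUND_2, &res);
            break;
    default:
            return false;       /* no conduit, no mitigation */
    }

    /* a negative a0 means "not supported" */
    if ((int)res.a0 < 0)
            return false;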
2018-07-22  arm64: Add per-cpu infrastructure to call ARCH_WORKAROUND_2  (Marc Zyngier)

commit 5cf9ce6e5ea50f805c6188c04ed0daaec7b6887d upstream.

In a heterogeneous system, we can end up with both affected and unaffected CPUs. Let's check their status before calling into the firmware.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
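The per-cpu variable name below is the one the surrounding commits reference; the C helper is illustrative only, since the fast-path check actually lives in the entry assembly:

    /* one flag per CPU, so unaffected cores skip the firmware call */
    DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);

    static void maybe_apply_ssbd(bool state)    /* hypothetical helper */
    {
            if (__this_cpu_read(arm64_ssbd_callback_required))
                    arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_2, state, NULL);
    }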
2018-07-22  arm64: Call ARCH_WORKAROUND_2 on transitions between EL0 and EL1  (Marc Zyngier)

commit 8e2906245f1e3b0d027169d9f2e55ce0548cb96e upstream.

In order for the kernel to protect itself, let's call the SSBD mitigation implemented by the higher exception level (either hypervisor or firmware) on each transition between userspace and kernel. We must take the PSCI conduit into account in order to target the right exception level, hence the introduction of a runtime patching callback.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
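Close to the shape of the patching callback (the function name appears in the build error quoted earlier; the exact signature is assumed from the alternatives-callback convention): pick HVC or SMC depending on the PSCI conduit and patch that instruction over the default NOP.

    void __init arm64_update_smccc_conduit(struct alt_instr *alt,
                                           __le32 *origptr, __le32 *updptr,
                                           int nr_inst)
    {
            u32 insn;

            switch (psci_ops.conduit) {
            case PSCI_CONDUIT_HVC:
                    insn = aarch64_insn_get_hvc_value();
                    break;
            case PSCI_CONDUIT_SMC:
                    insn = aarch64_insn_get_smc_value();
                    break;
            default:
                    return;     /* no conduit: leave the NOP in place */
            }

            *updptr = cpu_to_le32(insn);
    }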
2018-05-30  arm64: Relax ARM_SMCCC_ARCH_WORKAROUND_1 discovery  (Marc Zyngier)

[ Upstream commit e21da1c992007594d391e7b301779cf30f438691 ]

A recent update to the ARM SMCCC ARCH_WORKAROUND_1 specification allows firmware to return a non-zero, positive value to describe that although the mitigation is implemented at the higher exception level, the CPU on which the call is made is not affected. Let's relax the check on the return value from ARCH_WORKAROUND_1 so that we only error out if the returned value is negative.

Fixes: b092201e0020 ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
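In sketch form, the relaxation is a one-line change to the return-value check:

    /* before: any non-zero return was treated as "not supported" */
    if (res.a0)
            return false;

    /* after: only a negative value rejects the mitigation; a positive
     * value means "implemented, but this CPU does not need it" */
    if ((int)res.a0 < 0)
            return false;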
2018-04-20  arm64: Kill PSCI_GET_VERSION as a variant-2 workaround  (Mark Rutland)

From: Marc Zyngier <marc.zyngier@arm.com>

commit 3a0a397ff5ff8b56ca9f7908b75dee6bf0b5fabb upstream.

Now that we've standardised on SMCCC v1.1 to perform the branch prediction invalidation, let's drop the previous band-aid. If vendors haven't updated their firmware to do SMCCC 1.1, they haven't updated PSCI either, so we don't lose anything.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-20  arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support  (Mark Rutland)

From: Marc Zyngier <marc.zyngier@arm.com>

commit b092201e0020614127f495c092e0a12d26a2116e upstream.

Add the detection and runtime code for ARM_SMCCC_ARCH_WORKAROUND_1. It is lovely. Really.

Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
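The runtime side reduces to a single SMCCC 1.1 call into firmware to invalidate the branch predictor, one variant per conduit; a sketch:

    static void call_smc_arch_workaround_1(void)
    {
            arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
    }

    static void call_hvc_arch_workaround_1(void)
    {
            arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
    }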
2018-04-20  arm64: Branch predictor hardening for Cavium ThunderX2  (Mark Rutland)

From: Jayachandran C <jnair@caviumnetworks.com>

commit f3d795d9b360523beca6d13ba64c2c532f601149 upstream.

Use PSCI based mitigation for speculative execution attacks targeting the branch predictor. We use the same mechanism as the one used for Cortex-A CPUs; we expect the PSCI version call to have a side effect of clearing the BTBs.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-20  arm64: Implement branch predictor hardening for affected Cortex-A CPUs  (Mark Rutland)

From: Will Deacon <will.deacon@arm.com>

commit aa6acde65e03186b5add8151e1ffe36c3c62639b upstream.

Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing and can theoretically be attacked by malicious code. This patch implements a PSCI-based mitigation for these CPUs when available. The call into firmware will invalidate the branch predictor state, preventing any malicious entries from affecting other victim contexts.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-20  arm64: cpu_errata: Allow an erratum to be matched for all revisions of a core  (Mark Rutland)

From: Marc Zyngier <marc.zyngier@arm.com>

commit 06f1494f837da8997d670a1ba87add7963b08922 upstream.

Some minor errata may not be fixed in further revisions of a core, leading to a situation where the workaround needs to be updated each time an updated core is released. Introduce a MIDR_ALL_VERSIONS match helper that will work for all versions of that MIDR, once and for all.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
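A sketch of the helper, sitting next to the existing MIDR_RANGE() initialiser; the field names follow the v4.9-era arm64_cpu_capabilities layout and are assumptions:

    /* match any variant/revision of the given MIDR model */
    #define MIDR_ALL_VERSIONS(model)                                   \
            .matches = is_affected_midr_range,                         \
            .midr_model = model,                                       \
            .midr_range_min = 0,                                       \
            .midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)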
2018-04-20  arm64: Add skeleton to harden the branch predictor against aliasing attacks  (Mark Rutland)

From: Will Deacon <will.deacon@arm.com>

commit 0f15adbb2861ce6f75ccfc5a92b19eae0ef327d0 upstream.

Aliasing attacks against CPU branch predictors can allow an attacker to redirect speculative control flow on some CPUs and potentially divulge information from one context to another. This patch adds initial skeleton code behind a new Kconfig option to enable implementation-specific mitigations against these attacks for CPUs that are affected.

Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[v4.9: copy bp hardening cb via text mapping]
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-20  arm64: Run enable method for errata work arounds on late CPUs  (Mark Rutland)

From: Suzuki K Poulose <suzuki.poulose@arm.com>

commit 55b35d070c2534dfb714b883f3c3ae05d02032da upstream.

When a CPU is brought up after we have finalised the system wide capabilities (i.e., features and errata), we make sure the new CPU doesn't need a new errata work around which has not been detected already. However, we don't run the enable() method on the new CPU for the errata work arounds already detected. This could cause the new CPU to run without potential work arounds. It is up to the enable() method to decide if this CPU should do something about the erratum.

Fixes: commit 6a6efbb45b7d95c84 ("arm64: Verify CPU errata work arounds on hotplugged CPU")
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [v4.9 backport]
Tested-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
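A sketch of the verification path gaining the enable() call; the loop shape and names mirror the surrounding series but are assumptions, not the exact diff:

    static void verify_local_cpu_errata_workarounds(void)
    {
            const struct arm64_cpu_capabilities *caps = arm64_errata;

            /* run enable() for workarounds already established system-wide */
            for (; caps->matches; caps++)
                    if (cpus_have_cap(caps->capability) && caps->enable)
                            caps->enable((void *)caps);
    }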
2016-10-20  arm64: cpufeature: Schedule enable() calls instead of calling them via IPI  (James Morse)

The enable() call for a cpufeature/erratum is made using on_each_cpu(). This issues a cross-call IPI to get the work done. Implicitly, this stashes the running PSTATE in SPSR when the CPU receives the IPI, and restores it when we return. This means an enable() call can never modify PSTATE.

To allow PAN to do this, change the on_each_cpu() call to use stop_machine(). This schedules the work on each CPU, which allows us to modify PSTATE. This involves changing the prototype of all the enable() functions.

enable_cpu_capabilities() is called during boot and enables the feature on all online CPUs. This path now uses stop_machine(). CPU features for hotplugged CPUs are enabled by verify_local_cpu_features(), which only acts on the local CPU and can already modify the running PSTATE, as it is called from secondary_start_kernel().

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
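In sketch form, the core of the change (note the prototype difference: on_each_cpu() wants void (*)(void *), stop_machine() wants int (*)(void *), which is why every enable() signature changes):

    /* before: enable() ran in IPI context and could not change PSTATE */
    on_each_cpu(caps->enable, NULL, true);

    /* after: enable() runs via stop_machine(), in process context,
     * so it may flip PSTATE bits such as PAN */
    stop_machine(caps->enable, NULL, cpu_online_mask);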
2016-09-09  arm64: Work around systems with mismatched cache line sizes  (Suzuki K Poulose)

Systems with differing CPU i-cache/d-cache line sizes can cause problems with cache management by software when execution is migrated from one CPU to another. Usually, an application reads the cache size on a CPU and then uses that length to perform cache operations. However, if it gets migrated to another CPU with a smaller cache line size, things could go completely wrong. To prevent such cases, always use the smallest cache line size among the CPUs. The kernel CPU feature infrastructure already keeps track of the safe value for all CPUID registers, including CTR. This patch works around the problem by:

- For the kernel: dynamically patch the kernel to read the cache size from the system wide copy of CTR_EL0.
- For applications: trap read accesses to CTR_EL0 (by clearing SCTLR.UCT) and emulate the mrs instruction to return the system wide safe value of CTR_EL0.

For faster access (i.e., avoiding a lookup of the system wide value of CTR_EL0 via read_system_reg), we keep track of the pointer to the table entry for CTR_EL0 in the CPU feature infrastructure.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
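A sketch of the EL0 trap path for the second bullet: emulate "mrs xN, ctr_el0" with the system-wide safe value instead of this CPU's register. The handler name and register-write style are assumptions; aarch64_insn_decode_register() is the kernel's instruction-decoding helper:

    static int ctr_read_handler(struct pt_regs *regs, u32 insn)
    {
            int rt = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);

            /* hand userspace the sanitised, system-wide CTR_EL0 */
            regs->regs[rt] = read_system_reg(SYS_CTR_EL0);
            regs->pc += 4;          /* step over the trapped mrs */
            return 0;
    }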
2016-09-09  arm64: Use consistent naming for errata handling  (Suzuki K Poulose)

This is a cosmetic change to rename the functions dealing with the errata work arounds, to make their naming more consistent.

1) check_local_cpu_errata() => update_cpu_errata_workarounds()
   check_local_cpu_errata() actually updates the system's errata work arounds, so rename it to reflect that.

2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
   Use errata_workarounds instead of _errata.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-07-27  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)

Pull arm64 updates from Catalin Marinas:

- Kexec support for arm64
- Kprobes support
- Expose MIDR_EL1 and REVIDR_EL1 CPU identification registers to sysfs
- Trapping of user space cache maintenance operations and emulation in the kernel (CPU errata workaround)
- Clean-up of the early page tables creation (kernel linear mapping, EFI run-time maps) to avoid splitting larger blocks (e.g. pmds) into smaller ones (e.g. ptes)
- VDSO support for CLOCK_MONOTONIC_RAW in clock_gettime()
- ARCH_HAS_KCOV enabled for arm64
- Optimise IP checksum helpers
- SWIOTLB optimisation to only allocate/initialise the buffer if the available RAM is beyond the 32-bit mask
- Properly handle the "nosmp" command line argument
- Fix for the initialisation of the CPU debug state during early boot
- vdso-offsets.h build dependency workaround
- Build fix when RANDOMIZE_BASE is enabled with MODULES off

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (64 commits)
  arm64: arm: Fix-up the removal of the arm64 regs_query_register_name() prototype
  arm64: Only select ARM64_MODULE_PLTS if MODULES=y
  arm64: mm: run pgtable_page_ctor() on non-swapper translation table pages
  arm64: mm: make create_mapping_late() non-allocating
  arm64: Honor nosmp kernel command line option
  arm64: Fix incorrect per-cpu usage for boot CPU
  arm64: kprobes: Add KASAN instrumentation around stack accesses
  arm64: kprobes: Cleanup jprobe_return
  arm64: kprobes: Fix overflow when saving stack
  arm64: kprobes: WARN if attempting to step with PSTATE.D=1
  arm64: debug: remove unused local_dbg_{enable, disable} macros
  arm64: debug: remove redundant spsr manipulation
  arm64: debug: unmask PSTATE.D earlier
  arm64: localise Image objcopy flags
  arm64: ptrace: remove extra define for CPSR's E bit
  kprobes: Add arm64 case in kprobe example module
  arm64: Add kernel return probes support (kretprobes)
  arm64: Add trampoline code for kretprobes
  arm64: kprobes instruction simulation support
  arm64: Treat all entry code as non-kprobe-able
  ...
2016-07-07  arm64: Enable workaround for Cavium erratum 27456 on thunderx-81xx  (Ganapatrao Kulkarni)

Cavium erratum 27456, worked around in commit 104a0c02e8b1 ("arm64: Add workaround for Cavium erratum 27456"), is applicable to the thunderx-81xx pass1.0 SoC as well. Add code to enable the workaround for 81xx.

Signed-off-by: Ganapatrao Kulkarni <gkulkarni@cavium.com>
Reviewed-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
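For a change like this, the patch typically just adds another entry to the errata table in cpu_errata.c; a sketch, with the MIDR constant assumed from the kernel's ThunderX naming:

    {
            /* Cavium ThunderX, T81 pass 1.0 */
            .desc = "Cavium erratum 27456",
            .capability = ARM64_WORKAROUND_CAVIUM_27456,
            MIDR_RANGE(MIDR_THUNDERX_81XX, 0x00, 0x00),
    },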
2016-07-01  arm64: trap userspace "dc cvau" cache operation on errata-affected core  (Andre Przywara)

The ARM errata 819472, 826319, 827319 and 824069 for affected Cortex-A53 cores demand that "dc cvau" instructions be promoted to "dc civac". Since we allow userspace to also emit those instructions, we should make sure that "dc cvau" gets promoted there too. So let's grasp the nettle here and actually trap every userland cache maintenance instruction once we detect at least one affected core in the system. We then emulate the instruction by executing it on behalf of userland, promoting "dc cvau" to "dc civac" on the way and injecting an access fault back into userspace.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[catalin.marinas@arm.com: s/set_segfault/arm64_notify_segfault/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-07-01  arm64: errata: Calling enable functions for CPU errata too  (Andre Przywara)

Currently we call the (optional) enable function for CPU _features_ only. As CPU _errata_ descriptions share the same data structure, and having an enable function is useful for errata as well (for instance to set bits in SCTLR), let's call it when enumerating errata too.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-04-25  arm64: Verify CPU errata work arounds on hotplugged CPU  (Suzuki K Poulose)

CPU errata work arounds are detected and applied to the kernel code at boot time, and the data is then freed up. If a new hotplugged CPU requires a work around which was not applied at boot time, there is nothing we can do but simply fail the boot.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
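A sketch of the verification loop, assuming the scope parameter from the companion patch below and a cpu_die_early()-style helper to park the offending CPU:

    static void verify_local_cpu_errata(void)
    {
            const struct arm64_cpu_capabilities *caps = arm64_errata;

            for (; caps->matches; caps++)
                    if (!cpus_have_cap(caps->capability) &&
                        caps->matches(caps, SCOPE_LOCAL_CPU)) {
                            /* too late to patch: refuse to bring the CPU up */
                            pr_crit("CPU%d: requires work around for %s, not detected at boot\n",
                                    smp_processor_id(), caps->desc);
                            cpu_die_early();
                    }
    }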
2016-04-25  arm64: cpufeature: Add scope for capability check  (Suzuki K Poulose)

Add a scope parameter to arm64_cpu_capabilities::matches(), so that it can be reused for checking the capability on a given CPU as well as system wide. The system uses the default scope associated with the capability for initialising the CPU_HWCAPs and ELF_HWCAPs.

Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
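In sketch form, the interface change (names follow the series; the struct layout shown is abbreviated):

    /* scope values for the matches() hook */
    enum {
            SCOPE_LOCAL_CPU,    /* check the calling CPU only */
            SCOPE_SYSTEM,       /* check the sanitised system-wide state */
    };

    struct arm64_cpu_capabilities {
            /* ... other fields elided ... */
            int def_scope;      /* default scope, used for HWCAP setup */
            bool (*matches)(const struct arm64_cpu_capabilities *caps,
                            int scope);
    };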
2016-02-26  arm64: Add workaround for Cavium erratum 27456  (Andrew Pinski)

On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI instructions may cause the icache to become corrupted if it contains data for a non-current ASID. This patch implements the workaround (which invalidates the local icache when switching the mm) by using code patching.

Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-02-16  arm64: prefetch: add alternative pattern for CPUs without a prefetcher  (Will Deacon)

Most CPUs have a hardware prefetcher which generally performs better without explicit prefetch instructions issued by software; however, some CPUs (e.g. Cavium ThunderX) rely solely on explicit prefetch instructions. This patch adds an alternative pattern (ARM64_HAS_NO_HW_PREFETCH) to allow our library code to make use of explicit prefetch instructions during things like copy routines only when the CPU does not have the capability to perform the prefetching itself.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-11-24  arm64: KVM: Add workaround for Cortex-A57 erratum 834220  (Marc Zyngier)

Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults when a Stage 1 permission fault or device alignment fault should have been reported. This patch implements the workaround (which is to validate that the Stage-1 translation actually succeeds) by using code patching.

Cc: stable@vger.kernel.org
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2015-11-04  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)

Pull arm64 updates from Catalin Marinas:

- "genirq: Introduce generic irq migration for cpu hotunplugged" patch merged from tip/irq/for-arm to allow the arm64-specific part to be upstreamed via the arm64 tree
- CPU feature detection reworked to cope with heterogeneous systems where CPUs may not have exactly the same features. The features reported by the kernel via internal data structures or ELF_HWCAP are delayed until all the CPUs are up (and before user space starts)
- Support for 16KB pages, with the additional bonus of a 36-bit VA space, though the latter only depending on EXPERT
- Implement native {relaxed, acquire, release} atomics for arm64
- New ASID allocation algorithm which avoids IPI on roll-over, together with TLB invalidation optimisations (using local vs global where feasible)
- KASan support for arm64
- EFI_STUB clean-up and isolation for the kernel proper (required by KASan)
- copy_{to,from,in}_user optimisations (sharing the memcpy template)
- perf: moving arm64 to the arm32/64 shared PMU framework
- L1_CACHE_BYTES increased to 128 to accommodate Cavium hardware
- Support for the contiguous PTE hint on kernel mapping (16 consecutive entries may be able to use a single TLB entry)
- Generic CONFIG_HZ now used on arm64
- defconfig updates

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (91 commits)
  arm64/efi: fix libstub build under CONFIG_MODVERSIONS
  ARM64: Enable multi-core scheduler support by default
  arm64/efi: move arm64 specific stub C code to libstub
  arm64: page-align sections for DEBUG_RODATA
  arm64: Fix build with CONFIG_ZONE_DMA=n
  arm64: Fix compat register mappings
  arm64: Increase the max granular size
  arm64: remove bogus TASK_SIZE_64 check
  arm64: make Timer Interrupt Frequency selectable
  arm64/mm: use PAGE_ALIGNED instead of IS_ALIGNED
  arm64: cachetype: fix definitions of ICACHEF_* flags
  arm64: cpufeature: declare enable_cpu_capabilities as static
  genirq: Make the cpuhotplug migration code less noisy
  arm64: Constify hwcap name string arrays
  arm64/kvm: Make use of the system wide safe values
  arm64/debug: Make use of the system wide safe value
  arm64: Move FP/ASIMD hwcap handling to common code
  arm64/HWCAP: Use system wide safe values
  arm64/capabilities: Make use of system wide safe value
  arm64: Delay cpu feature capability checks
  ...
2015-10-21  arm64: Refactor check_cpu_capabilities  (Suzuki K. Poulose)

check_cpu_capabilities runs through a given list of caps, checks if the system has each cap, updates the system capability bitmap and also runs any enable() methods associated with them. All of this is not quite obvious from the name 'check'. This patch splits check_cpu_capabilities into two parts:

1) update_cpu_capabilities => Runs through the given list and updates the system wide capability map.
2) enable_cpu_capabilities => Runs through the given list and invokes enable() (if any) for the caps enabled on the system.

Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
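In sketch form, the resulting split entry points (the info string parameter is assumed from how the detection paths log their findings):

    /* detect caps on the running system and set the capability bits */
    void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
                                 const char *info);

    /* run the enable() hook for each capability the system has */
    void enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps);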
2015-09-29  irqchip/gicv3: Workaround for Cavium ThunderX erratum 23154  (Robert Richter)

This patch implements the workaround for Cavium ThunderX erratum 23154. The gicv3 of ThunderX requires a modified version of the IAR status read to ensure data synchronization. Since this is in the fast path and called with each interrupt, the check is runtime-patched using jump labels for the smallest overhead (a no-op on unaffected parts). This is the same technique as used for tracepoints.

Signed-off-by: Robert Richter <rrichter@cavium.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1442869119-1814-3-git-send-email-rric@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
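A sketch of the fast-path dispatch, written against the current static-key API (the original patch used the older static_key interface; the reader function names follow the gicv3 driver's common/ThunderX split):

    static DEFINE_STATIC_KEY_FALSE(is_cavium_thunderx);

    static u64 gic_read_iar(void)
    {
            /* patched to a no-op branch on unaffected parts */
            if (static_branch_unlikely(&is_cavium_thunderx))
                    return gic_read_iar_cavium_thunderx();
            return gic_read_iar_common();
    }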
2015-04-01  arm64: fix midr range for Cortex-A57 erratum 832075  (Bo Yan)

Register MIDR_EL1 is masked to get the variant and revision fields, then compared against midr_range_min and midr_range_max when checking whether a CPU is affected by any particular erratum. However, the variant and revision fields in MIDR_EL1 are separated by 16 bits, so the min and max of the midr range should be constructed accordingly; otherwise the patch will not be applied when the variant field is non-0.

Cc: stable@vger.kernel.org # 3.19+
Acked-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Bo Yan <byan@nvidia.com>
[will: use MIDR_VARIANT_SHIFT to construct upper bound]
Signed-off-by: Will Deacon <will.deacon@arm.com>
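A worked example of the bug: in MIDR_EL1 the revision lives in bits [3:0] and the variant in bits [23:20], so "r1p2" must be encoded with the variant shifted into place, not packed next to the revision:

    /* wrong: 0x12 puts the "variant" in bits [7:4], which never
     * matches a real MIDR once the variant field is non-zero */
    .midr_range_max = 0x12,

    /* right: shift the variant up to bit 20 (MIDR_VARIANT_SHIFT) */
    .midr_range_max = (1 << MIDR_VARIANT_SHIFT) | 2,    /* r1p2 */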
2015-04-01  arm64: errata: add workaround for cortex-a53 erratum #845719  (Will Deacon)

When running a compat (AArch32) userspace on Cortex-A53, a load at EL0 from a virtual address that matches the bottom 32 bits of the virtual address used by a recent load at (AArch64) EL1 might return incorrect data. This patch works around the issue by writing to the contextidr_el1 register on the exception return path when returning to a 32-bit task. This workaround is patched in at runtime based on the MIDR value of the processor.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-30  arm64: Extract feature parsing code from cpu_errata.c  (Marc Zyngier)

As we detect more architectural features at runtime, it makes sense to reuse the existing framework whilst avoiding calling a feature an erratum... This patch extracts the core capability parsing, moves it into a new file (cpufeature.c), and lets the CPU errata detection code use it.

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: alternatives: fix pr_fmt string for consistency  (Will Deacon)

Consistently use the plural form for alternatives pr_fmt strings.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: protect alternatives workarounds with Kconfig options  (Andre Przywara)

Not all of the errata we have workarounds for apply necessarily to all SoCs, so people compiling a kernel for one very specific SoC may not need to patch the kernel. Introduce a new submenu in the "Platform selection" menu to allow people to turn off certain bugs if they are not affected. By default all of them are enabled. Normal users or distribution kernels shouldn't bother to deselect any bugs here, since the alternatives framework will take care of patching them in only if needed.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: moved kconfig menu under `Kernel Features']
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: add Cortex-A57 erratum 832075 workaround  (Andre Przywara)

The ARM erratum 832075 applies to certain revisions of Cortex-A57; one of the workarounds is to change device loads into using load-acquire semantics. This is achieved using the alternatives framework.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: add Cortex-A53 cache errata workaround  (Andre Przywara)

The ARM errata 819472, 826319, 827319 and 824069 define the same workaround for these hardware issues in certain Cortex-A53 parts. Use the new alternatives framework and the CPU MIDR detection to patch "cache clean" into "cache clean and invalidate" instructions if an affected CPU is detected at runtime.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[will: add __maybe_unused to squash gcc warning]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: detect silicon revisions and set cap bits accordingly  (Andre Przywara)

After each CPU has been started, we iterate through a list of CPU features or bugs to detect CPUs which need (or could benefit from) kernel code patches. For each feature/bug there is a function which checks if that particular CPU is affected. We will later provide some more generic functions for common things like testing for certain MIDR ranges. We do this for every CPU to cover big.LITTLE systems properly as well. If a certain feature/bug has been detected, the capability bit will be set, so that later the call to apply_alternatives() will trigger the actual code patching.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
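A sketch of the table that drives this detection; the field names follow the commit's description, the example entry is hypothetical, and the exact struct layout in the patch may differ:

    struct arm64_cpu_capabilities {
            const char *desc;
            u16 capability;
            bool (*is_affected)(struct arm64_cpu_capabilities *entry);
    };

    static struct arm64_cpu_capabilities arm64_errata[] = {
            {
                    .desc = "example vendor erratum",       /* hypothetical */
                    .capability = ARM64_WORKAROUND_EXAMPLE, /* hypothetical */
                    .is_affected = is_affected_midr_range,
            },
            {}      /* sentinel */
    };

Detection then walks this table on each CPU, sets the capability bit for any matching entry, and leaves the actual code patching to apply_alternatives().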