author		Sebastian Siewior <bigeasy@linutronix.de>	2016-03-08 10:03:56 +0100
committer	Sasha Levin <sasha.levin@oracle.com>	2016-07-10 23:06:57 -0400
commit		7e69c7b26e03e6f573d8613326c94f7ddc5935b8 (patch)
tree		eba6c14aaf63e10879969049953bb18c15a45ef7 /arch
parent		39bf5645d861cb5dc7913b183fecfc6fad033b4e (diff)
powerpc/mm: Fixup preempt underflow with huge pages
[ Upstream commit 08a5bb2921e490939f78f38fd0d02858bb709942 ]

hugepd_free() used __get_cpu_var() once. Nothing ensured that the code
accessing the variable did not migrate from one CPU to another, and soon
this was noticed by Tiejun Chen in 94b09d755462 ("powerpc/hugetlb:
Replace __get_cpu_var with get_cpu_var"). So we had it fixed.

Christoph Lameter was doing his __get_cpu_var() replacements and forgot
PowerPC. Then he noticed this and sent his fixed-up batch again, which
got applied as 69111bac42f5 ("powerpc: Replace __get_cpu_var uses").

The careful reader will notice one little detail: get_cpu_var() got
replaced with this_cpu_ptr(). So now we have a put_cpu_var() which does
a preempt_enable() and nothing that does preempt_disable(), so we
underflow the preempt counter.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
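As background for the fix below, here is a small self-contained C model of the pairing the message describes: get_cpu_var() disables preemption and must be balanced by put_cpu_var(), while this_cpu_ptr() leaves the preempt counter untouched, so a stray put_cpu_var() underflows it. The stubbed macros and the plain preempt_count variable are simplifications introduced for illustration, not the kernel's real implementations; the get_cpu_var() stub uses a GNU C statement expression (as the real macro does), so it builds with gcc or clang.

#include <stdio.h>

/* Simplified stand-ins for the kernel primitives (illustration only). */
static int preempt_count;               /* models the per-task preempt counter */
static int cpu_var;                     /* models a per-CPU variable */

#define preempt_disable()  (preempt_count++)
#define preempt_enable()   (preempt_count--)

/* get_cpu_var(): disable preemption, then hand back the variable.
 * Must be paired with put_cpu_var(), which re-enables preemption. */
#define get_cpu_var(var)   (*({ preempt_disable(); &(var); }))
#define put_cpu_var(var)   preempt_enable()

/* this_cpu_ptr(): just an address computation, no preempt accounting. */
#define this_cpu_ptr(ptr)  (ptr)

int main(void)
{
	/* Broken pattern after 69111bac42f5: no disable, but an enable. */
	int *p = this_cpu_ptr(&cpu_var);
	(void)p;
	put_cpu_var(cpu_var);
	printf("this_cpu_ptr + put_cpu_var: preempt_count = %d (underflow)\n",
	       preempt_count);

	/* Pattern restored by this commit: disable and enable stay balanced. */
	preempt_count = 0;
	int *q = &get_cpu_var(cpu_var);
	(void)q;
	put_cpu_var(cpu_var);
	printf("get_cpu_var + put_cpu_var:  preempt_count = %d (balanced)\n",
	       preempt_count);
	return 0;
}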
Diffstat (limited to 'arch')
-rw-r--r--	arch/powerpc/mm/hugetlbpage.c	4
1 file changed, 2 insertions, 2 deletions
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 3385e3d0506e..4d29154a4987 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -472,13 +472,13 @@ static void hugepd_free(struct mmu_gather *tlb, void *hugepte)
 {
 	struct hugepd_freelist **batchp;
 
-	batchp = this_cpu_ptr(&hugepd_freelist_cur);
+	batchp = &get_cpu_var(hugepd_freelist_cur);
 
 	if (atomic_read(&tlb->mm->mm_users) < 2 ||
 	    cpumask_equal(mm_cpumask(tlb->mm),
 			  cpumask_of(smp_processor_id()))) {
 		kmem_cache_free(hugepte_cache, hugepte);
-		put_cpu_var(hugepd_freelist_cur);
+		put_cpu_var(hugepd_freelist_cur);
 		return;
 	}