author	Roland Dreier <rdreier@cisco.com>	2008-09-15 10:43:35 +0000
committer	Benjamin Herrenschmidt <benh@kernel.crashing.org>	2008-10-07 14:26:18 +1100
commit	a880e7623397bcb44877b012cd65baa11ad1bbf8 (patch)
tree	a413d15fbb5145feafed14bfa7e9ed433cf4603f /arch
parent	6ddc9d3200c25edddd7051f208dbbdd8e16f0734 (diff)
powerpc: Avoid integer overflow in page_is_ram()
Commit 8b150478 ("ppc: make phys_mem_access_prot() work with pfns instead of addresses") fixed page_is_ram() in arch/ppc to avoid overflow for addresses above 4G on 32-bit kernels. However, arch/powerpc's page_is_ram() is missing the same fix -- it computes a physical address by doing pfn << PAGE_SHIFT, which overflows if pfn corresponds to a page above 4G.

In particular this causes pages above 4G to be mapped with the wrong caching attribute; for example many ppc440-based SoCs have PCI space above 4G, and mmap()ing MMIO space may end up with a mapping that has caching enabled.

Fix this by working with the pfn and avoiding the conversion to physical address that causes the overflow. This patch compares the pfn to max_pfn, which is a semantic change from the old code -- that code compared the physical address to high_memory, which corresponds to max_low_pfn. However, I think that was another bug, since highmem pages are still RAM.

Reported-by: vb <vb@vsbe.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
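To make the overflow concrete, here is a minimal userspace sketch (not part of the patch and not kernel code). It assumes PAGE_SHIFT == 12 and a 32-bit unsigned long, as on the affected 32-bit powerpc kernels; the 768MB lowmem boundary and the names pfn, paddr, lowmem_top and max_pfn are illustrative stand-ins for the kernel values, not taken from the tree.

/*
 * Sketch of the overflow described above.  Assumes uint32_t models the
 * 32-bit "unsigned long" of a 32-bit powerpc kernel and PAGE_SHIFT == 12;
 * the lowmem boundary and variable names are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12

int main(void)
{
	uint32_t pfn = 0x100000;	/* page at physical 4GB, just past the 32-bit limit */

	/* Old check: convert the pfn to a physical address first.  In 32-bit
	 * unsigned arithmetic the shift wraps around to 0. */
	uint32_t paddr = pfn << PAGE_SHIFT;

	uint32_t lowmem_top = 0x30000000;		/* stand-in for __pa(high_memory): 768MB */
	uint32_t max_pfn = lowmem_top >> PAGE_SHIFT;	/* stand-in for max_pfn */

	/* The wrapped paddr compares below lowmem_top, so the MMIO page is
	 * wrongly treated as RAM and may end up mapped cacheable. */
	printf("old: paddr=0x%08x is_ram=%d\n", paddr, paddr < lowmem_top);

	/* The new check stays in pfn space and never overflows. */
	printf("new: pfn=0x%08x   is_ram=%d\n", pfn, pfn < max_pfn);

	return 0;
}

Running this prints is_ram=1 for the old check and is_ram=0 for the new one, which is exactly the misclassification the patch removes.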
Diffstat (limited to 'arch')
-rw-r--r--	arch/powerpc/mm/mem.c	5
1 files changed, 2 insertions, 3 deletions
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 1c93c255873b..98d7bf99533a 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -75,11 +75,10 @@ static inline pte_t *virt_to_kpte(unsigned long vaddr)
 int page_is_ram(unsigned long pfn)
 {
-	unsigned long paddr = (pfn << PAGE_SHIFT);
-
 #ifndef CONFIG_PPC64	/* XXX for now */
-	return paddr < __pa(high_memory);
+	return pfn < max_pfn;
 #else
+	unsigned long paddr = (pfn << PAGE_SHIFT);
 	int i;
 	for (i=0; i < lmb.memory.cnt; i++) {
 		unsigned long base;