From a3dc327c1f5ccbe64d998f5e44926c967afd8a30 Mon Sep 17 00:00:00 2001
From: Paul Burton
Date: Tue, 1 Mar 2016 02:37:57 +0000
Subject: MIPS: Flush highmem pages in __flush_dcache_page

commit 234859e49a15323cf1b2331bdde7f658c4cb45fb upstream.

When flush_dcache_page is called on an executable page, that page is
about to be provided to userland & we can presume that the icache
contains no valid entries for its address range. However if the icache
does not fill from the dcache then we cannot presume that the page's
content has been written back as far as the memories that the dcache
will fill from (i.e. L2 or further out).

This was being done for lowmem pages, but not for highmem, which can
lead to icache corruption. Fix this by mapping highmem pages & flushing
their content from the dcache in __flush_dcache_page before providing
the page to userland, just as is done for lowmem pages.

Signed-off-by: Paul Burton
Cc: Lars Persson
Cc: Andrew Morton
Cc: Kirill A. Shutemov
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12720/
Signed-off-by: Ralf Baechle
Signed-off-by: Greg Kroah-Hartman
---
 arch/mips/mm/cache.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

(limited to 'arch/mips/mm/cache.c')

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index aab218c36e0d..801016df8999 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -16,6 +16,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -83,8 +84,6 @@ void __flush_dcache_page(struct page *page)
 	struct address_space *mapping = page_mapping(page);
 	unsigned long addr;
 
-	if (PageHighMem(page))
-		return;
 	if (mapping && !mapping_mapped(mapping)) {
 		SetPageDcacheDirty(page);
 		return;
@@ -95,8 +94,15 @@ void __flush_dcache_page(struct page *page)
 	 * case is for exec env/arg pages and those are %99 certainly going to
 	 * get faulted into the tlb (and thus flushed) anyways.
	 */
-	addr = (unsigned long) page_address(page);
+	if (PageHighMem(page))
+		addr = (unsigned long)kmap_atomic(page);
+	else
+		addr = (unsigned long)page_address(page);
+
 	flush_data_cache_page(addr);
+
+	if (PageHighMem(page))
+		__kunmap_atomic((void *)addr);
 }
 EXPORT_SYMBOL(__flush_dcache_page);
--
cgit v1.2.3
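For readability, here is a sketch of the flush path as it looks with the
patch above applied, stitched together from the hunks. It is illustrative
rather than a verbatim copy of the resulting kernel function, and the
function name is changed to make that clear; the early-out that defers the
flush for unmapped mappings comes from the context lines of the diff.

/*
 * Illustrative sketch assembled from the hunks above; not a verbatim
 * copy of the kernel's __flush_dcache_page.
 */
void example_flush_dcache_page(struct page *page)
{
	struct address_space *mapping = page_mapping(page);
	unsigned long addr;

	if (mapping && !mapping_mapped(mapping)) {
		/* Nothing maps the page yet; defer the flush. */
		SetPageDcacheDirty(page);
		return;
	}

	/*
	 * Highmem pages have no permanent kernel mapping, so map them
	 * temporarily with kmap_atomic() instead of returning early as
	 * the pre-patch code did.
	 */
	if (PageHighMem(page))
		addr = (unsigned long)kmap_atomic(page);
	else
		addr = (unsigned long)page_address(page);

	flush_data_cache_page(addr);

	if (PageHighMem(page))
		__kunmap_atomic((void *)addr);
}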
From a8c09ec300b6bdfae69d03fd047848f8d34421e8 Mon Sep 17 00:00:00 2001
From: Paul Burton
Date: Tue, 1 Mar 2016 02:37:58 +0000
Subject: MIPS: Handle highmem pages in __update_cache

commit f4281bba818105c7c91799abe40bc05c0dbdaa25 upstream.

The following patch will expose __update_cache to highmem pages. Handle
them by mapping them in for the duration of the cache maintenance, just
like in __flush_dcache_page. The code for that isn't shared because we
need the page address in __update_cache, so sharing became messy. Given
that the entirety is an extra 5 lines, just duplicate it.

Signed-off-by: Paul Burton
Cc: Lars Persson
Cc: Andrew Morton
Cc: Jerome Marchand
Cc: Kirill A. Shutemov
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12721/
Signed-off-by: Ralf Baechle
Signed-off-by: Greg Kroah-Hartman
---
 arch/mips/mm/cache.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

(limited to 'arch/mips/mm/cache.c')

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 801016df8999..fe93a51c80ef 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -149,9 +149,17 @@ void __update_cache(struct vm_area_struct *vma, unsigned long address,
 		return;
 	page = pfn_to_page(pfn);
 	if (page_mapping(page) && Page_dcache_dirty(page)) {
-		addr = (unsigned long) page_address(page);
+		if (PageHighMem(page))
+			addr = (unsigned long)kmap_atomic(page);
+		else
+			addr = (unsigned long)page_address(page);
+
 		if (exec || pages_do_alias(addr, address & PAGE_MASK))
 			flush_data_cache_page(addr);
+
+		if (PageHighMem(page))
+			__kunmap_atomic((void *)addr);
+
 		ClearPageDcacheDirty(page);
 	}
 }
--
cgit v1.2.3
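Again purely for illustration, a sketch of the deferred-flush handling this
patch makes highmem-aware. The function name and parameter list below are
simplified and hypothetical; the body follows the hunk above: flush only
when a dcache flush is pending for the page, and only when the page may be
executed or when the kernel and user mappings can alias in the dcache.

/*
 * Illustrative sketch of the post-patch __update_cache logic; the name
 * and signature are simplified for illustration only.
 */
static void example_update_cache(struct page *page, unsigned long address, int exec)
{
	unsigned long addr;

	if (!(page_mapping(page) && Page_dcache_dirty(page)))
		return;		/* no deferred dcache flush pending */

	if (PageHighMem(page))
		addr = (unsigned long)kmap_atomic(page);
	else
		addr = (unsigned long)page_address(page);

	/*
	 * Flush when the page may be executed, or when the kernel
	 * mapping (addr) and the user mapping (address) can alias in a
	 * virtually indexed dcache.
	 */
	if (exec || pages_do_alias(addr, address & PAGE_MASK))
		flush_data_cache_page(addr);

	if (PageHighMem(page))
		__kunmap_atomic((void *)addr);

	ClearPageDcacheDirty(page);
}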
From 6a0538541547a6513126f947b4417ff6ee8a9316 Mon Sep 17 00:00:00 2001
From: Paul Burton
Date: Tue, 1 Mar 2016 02:37:59 +0000
Subject: MIPS: Sync icache & dcache in set_pte_at

commit 37d22a0d798b5c938b277d32cfd86dc231381342 upstream.

It's possible for pages to become visible prior to update_mmu_cache
running if a thread within the same address space preempts the current
thread or runs simultaneously on another CPU. That is, the following
scenario is possible:

    CPU0                          CPU1

    write to page
    flush_dcache_page
    flush_icache_page
    set_pte_at
                                  map page
    update_mmu_cache

If CPU1 maps the page in between CPU0's set_pte_at, which marks it
valid & visible, and update_mmu_cache where the dcache flush occurs,
then CPU1's icache will fill from stale data (unless it fills from the
dcache, in which case all is good, but most MIPS CPUs don't have this
property).

Commit 4d46a67a3eb8 ("MIPS: Fix race condition in lazy cache flushing.")
attempted to fix that by performing the dcache flush in
flush_icache_page such that it occurs before the set_pte_at call makes
the page visible. However it has the problem that not all code that
writes to pages exposed to userland calls flush_icache_page. There are
many callers of set_pte_at under mm/ and only 2 of them call
flush_icache_page. Thus the race window between a page becoming visible
& being coherent between the icache & dcache remains open in some
cases.

To illustrate some of the cases, a WARN was added to __update_cache
with this patch applied that triggered in cases where a page about to
be flushed from the dcache was not the last page provided to
flush_icache_page. That is, backtraces were obtained for cases in which
the race window is left open without this patch. The 2 standout
examples follow.

When forking a process:

[   15.271842] [<80417630>] __update_cache+0xcc/0x188
[   15.277274] [<80530394>] copy_page_range+0x56c/0x6ac
[   15.282861] [<8042936c>] copy_process.part.54+0xd40/0x17ac
[   15.289028] [<80429f80>] do_fork+0xe4/0x420
[   15.293747] [<80413808>] handle_sys+0x128/0x14c

When exec'ing an ELF binary:

[   14.445964] [<80417630>] __update_cache+0xcc/0x188
[   14.451369] [<80538d88>] move_page_tables+0x414/0x498
[   14.457075] [<8055d848>] setup_arg_pages+0x220/0x318
[   14.462685] [<805b0f38>] load_elf_binary+0x530/0x12a0
[   14.468374] [<8055ec3c>] search_binary_handler+0xbc/0x214
[   14.474444] [<8055f6c0>] do_execveat_common+0x43c/0x67c
[   14.480324] [<8055f938>] do_execve+0x38/0x44
[   14.485137] [<80413808>] handle_sys+0x128/0x14c

These code paths write into a page, call flush_dcache_page, then call
set_pte_at without flush_icache_page in between. The end result is that
the icache can become corrupted & userland processes may execute
unexpected or invalid code, typically resulting in a reserved
instruction exception, a trap or a segfault.

Fix this race condition fully by performing any cache maintenance
required to keep the icache & dcache in sync in set_pte_at, before the
page is made valid. This has the added bonus of ensuring the cache
maintenance always happens in one location, rather than being
duplicated in flush_icache_page & update_mmu_cache. It also matches the
way other architectures solve the same problem (see arm, ia64 &
powerpc).

Signed-off-by: Paul Burton
Reported-by: Ionela Voinescu
Cc: Lars Persson
Fixes: 4d46a67a3eb8 ("MIPS: Fix race condition in lazy cache flushing.")
Cc: Steven J. Hill
Cc: David Daney
Cc: Huacai Chen
Cc: Aneesh Kumar K.V
Cc: Andrew Morton
Cc: Jerome Marchand
Cc: Kirill A. Shutemov
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12722/
Signed-off-by: Ralf Baechle
Signed-off-by: Greg Kroah-Hartman
---
 arch/mips/mm/cache.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

(limited to 'arch/mips/mm/cache.c')

diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index fe93a51c80ef..e87bccd6e0aa 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -125,30 +125,17 @@ void __flush_anon_page(struct page *page, unsigned long vmaddr)
 
 EXPORT_SYMBOL(__flush_anon_page);
 
-void __flush_icache_page(struct vm_area_struct *vma, struct page *page)
-{
-	unsigned long addr;
-
-	if (PageHighMem(page))
-		return;
-
-	addr = (unsigned long) page_address(page);
-	flush_data_cache_page(addr);
-}
-EXPORT_SYMBOL_GPL(__flush_icache_page);
-
-void __update_cache(struct vm_area_struct *vma, unsigned long address,
-	pte_t pte)
+void __update_cache(unsigned long address, pte_t pte)
 {
 	struct page *page;
 	unsigned long pfn, addr;
-	int exec = (vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc;
+	int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc;
 
 	pfn = pte_pfn(pte);
 	if (unlikely(!pfn_valid(pfn)))
 		return;
 	page = pfn_to_page(pfn);
-	if (page_mapping(page) && Page_dcache_dirty(page)) {
+	if (Page_dcache_dirty(page)) {
 		if (PageHighMem(page))
 			addr = (unsigned long)kmap_atomic(page);
 		else
--
cgit v1.2.3
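The companion change that makes set_pte_at call __update_cache lives in the
MIPS pgtable headers and therefore falls outside this path-limited log.
Purely as a hypothetical illustration of the ordering argued for in the
commit message (the function name below is not the kernel's code), the idea
is that any icache/dcache synchronisation happens before the PTE is written
and the mapping becomes visible:

/*
 * Hypothetical illustration of the ordering described above: perform
 * the cache maintenance before installing the PTE, since another
 * thread sharing this mm may use the mapping as soon as it becomes
 * valid.  Not the kernel's literal set_pte_at implementation.
 */
static inline void example_set_pte_at(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep, pte_t pteval)
{
	if (pte_present(pteval))
		__update_cache(addr, pteval);	/* sync icache & dcache first */

	set_pte(ptep, pteval);			/* only now make the PTE visible */
}

Doing the maintenance here, rather than in flush_icache_page or
update_mmu_cache, closes the window in which another CPU can fetch stale
icache contents through a freshly installed mapping.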