author     Hugh Dickins <hugh.dickins@tiscali.co.uk>    2009-12-30 23:00:30 +0000
committer  Greg Kroah-Hartman <gregkh@suse.de>          2010-01-06 15:05:22 -0800
commit     8ac9e802007e99534ec66af850ef7b099df27499 (patch)
tree       fde720a1f13b82f58f10edd51d583eb293e305e6 /mm/mlock.c
parent     b2ea8cb9c8f1937cb80b9beb50548a05bfc37819 (diff)
ksm: fix mlockfreed to munlocked
2.6.33-rc1 commit 73848b4684e84a84cfd1555af78d41158f31e16b, adjusted to include
31e855ea7173bdb0520f9684580423a9560f66e0's movement of the unlock_page(oldpage),
but omit other intervening cleanups.

When KSM merges an mlocked page, it has been forgetting to munlock it: that's
been left to free_page_mlock(), which reports it in /proc/vmstat as
unevictable_pgs_mlockfreed instead of unevictable_pgs_munlocked, which indicates
that such pages _might_ be left unevictable for long after they should be
evictable.  Call munlock_vma_page() to fix that.

Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
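The diffstat below covers only mm/mlock.c, so the KSM-side caller of the newly
exported munlock_vma_page() is not shown on this page. As a rough sketch of the
merge path in mm/ksm.c making use of it (oldpage, newpage, vma and err are
assumed names, suggested by the unlock_page(oldpage) mentioned in the commit
message rather than taken from this diff):

	/*
	 * Sketch only, not the verbatim mm/ksm.c hunk: once the mlocked page
	 * in a VM_LOCKED vma has been replaced, munlock the old page so it is
	 * accounted as unevictable_pgs_munlocked instead of being left for
	 * free_page_mlock() to report as unevictable_pgs_mlockfreed.
	 */
	if ((vma->vm_flags & VM_LOCKED) && !err) {
		munlock_vma_page(oldpage);
		if (!PageMlocked(newpage)) {
			unlock_page(oldpage);
			lock_page(newpage);
			mlock_vma_page(newpage);
			oldpage = newpage;	/* for the final unlock */
		}
	}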
Diffstat (limited to 'mm/mlock.c')
-rw-r--r--	mm/mlock.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/mlock.c b/mm/mlock.c
index bd6f0e466f6c..2e05c97b04f5 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -99,14 +99,14 @@ void mlock_vma_page(struct page *page)
* not get another chance to clear PageMlocked. If we successfully
* isolate the page and try_to_munlock() detects other VM_LOCKED vmas
* mapping the page, it will restore the PageMlocked state, unless the page
- * is mapped in a non-linear vma. So, we go ahead and SetPageMlocked(),
+ * is mapped in a non-linear vma. So, we go ahead and ClearPageMlocked(),
* perhaps redundantly.
* If we lose the isolation race, and the page is mapped by other VM_LOCKED
* vmas, we'll detect this in vmscan--via try_to_munlock() or try_to_unmap()
* either of which will restore the PageMlocked state by calling
* mlock_vma_page() above, if it can grab the vma's mmap sem.
*/
-static void munlock_vma_page(struct page *page)
+void munlock_vma_page(struct page *page)
{
BUG_ON(!PageLocked(page));
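
The hunk shown above also drops the static qualifier from munlock_vma_page(),
so callers outside mm/mlock.c need to see a declaration. In this series that
would plausibly be added to mm/internal.h, along the lines of:

	/* mm/internal.h -- assumed location of the new declaration */
	extern void munlock_vma_page(struct page *page);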