author		Hugh Dickins <hugh.dickins@tiscali.co.uk>	2010-01-29 17:46:34 +0000
committer	Linus Torvalds <torvalds@linux-foundation.org>	2010-01-29 10:28:09 -0800
commit		a7016235a61d520e6806f38129001d935c4b6661 (patch)
tree		cfd42cc6f0c825c8d9033e91d6d971ea549520e3
parent		67f15b06c1a7e5417b7042b515ca2695de30beda (diff)
mm: fix migratetype bug which slowed swapping
After memory pressure has forced it to dip into the reserves, 2.6.32's commit 5f8dcc21211a3d4e3a7a5ca366b469fb88117f61 "page-allocator: split per-cpu list into one-list-per-migrate-type" has been returning MIGRATE_RESERVE pages to the MIGRATE_MOVABLE free_list: in some sense depleting reserves.

Fix that in the most straightforward way (which, considering the overheads of alternative approaches, is Mel's preference): the right migratetype is already in page_private(page), but free_pcppages_bulk() wasn't using it.

How did this bug show up? As a 20% slowdown in my tmpfs loop kbuild swapping tests, on PowerMac G5 with the SLUB allocator. Bisecting to that commit was easy, but explaining the magnitude of the slowdown was not easy. The same effect appears, but much less markedly, with SLAB, and even less markedly on other machines (the PowerMac divides into fewer zones than x86; I think that may be a factor). We guess that lumpy reclaim of short-lived high-order pages is implicated in some way, and that this bug has probably been tickling a poor decision somewhere in page reclaim.

But instrumentation hasn't told me much; I've run out of time and imagination to determine exactly what's going on, and shouldn't hold up the fix any longer: it's valid, and might even fix other misbehaviours.

Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
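[Editor's illustration] To make the failure mode concrete, here is a minimal userspace C model of the drain path; the names (toy_page, free_one_buggy, free_one_fixed, free_area) are hypothetical and not kernel code. A MIGRATE_RESERVE page queued on the MOVABLE per-cpu list ends up on whichever buddy free list the drain picks: trusting the list's migratetype leaks the reserve page into the MOVABLE free area, while trusting the value stashed in the page's private field, as the fix does, returns it home.

/*
 * Toy userspace model of the drain-path bug; not kernel code.
 * Build: cc -o drain drain.c && ./drain
 */
#include <stdio.h>

enum { MIGRATE_MOVABLE, MIGRATE_RESERVE, MIGRATE_TYPES };

struct toy_page {
	unsigned long private;		/* stands in for page_private(page) */
};

static int free_area[MIGRATE_TYPES];	/* pages on each buddy free list */

/* Buggy drain: trust the per-cpu list the page happened to sit on. */
static void free_one_buggy(struct toy_page *page, int list_migratetype)
{
	(void)page;
	free_area[list_migratetype]++;
}

/* Fixed drain: use the migratetype recorded in the page itself. */
static void free_one_fixed(struct toy_page *page, int list_migratetype)
{
	(void)list_migratetype;
	free_area[page->private]++;
}

int main(void)
{
	/* A RESERVE page that was queued on the MOVABLE per-cpu list,
	 * the scenario the commit message describes. */
	struct toy_page page = { .private = MIGRATE_RESERVE };

	free_one_buggy(&page, MIGRATE_MOVABLE);
	printf("buggy: movable=%d reserve=%d\n",
	       free_area[MIGRATE_MOVABLE], free_area[MIGRATE_RESERVE]);

	free_area[MIGRATE_MOVABLE] = free_area[MIGRATE_RESERVE] = 0;
	free_one_fixed(&page, MIGRATE_MOVABLE);
	printf("fixed: movable=%d reserve=%d\n",
	       free_area[MIGRATE_MOVABLE], free_area[MIGRATE_RESERVE]);
	return 0;
}

The buggy run prints movable=1 reserve=0 (the reserve page is lost to the MOVABLE pool); the fixed run prints movable=0 reserve=1.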
 mm/page_alloc.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d2a8889b4c58..8deb9d0fd5b1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -556,8 +556,9 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			page = list_entry(list->prev, struct page, lru);
 			/* must delete as __free_one_page list manipulates */
 			list_del(&page->lru);
-			__free_one_page(page, zone, 0, migratetype);
-			trace_mm_page_pcpu_drain(page, 0, migratetype);
+			/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
+			__free_one_page(page, zone, 0, page_private(page));
+			trace_mm_page_pcpu_drain(page, 0, page_private(page));
 		} while (--count && --batch_free && !list_empty(list));
 	}
 	spin_unlock(&zone->lock);
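[Editor's illustration] For context, a toy sketch of the queueing side this fix relies on. This is a userspace model, not the real free_hot_cold_page() (which, among other things, also special-cases MIGRATE_ISOLATE); it assumes the 2.6.32 migratetype layout where MIGRATE_RESERVE shares the value of MIGRATE_PCPTYPES. The point, confirmed by the comment added in the diff above: the page's true migratetype is stashed in its private field before types without a per-cpu list of their own are collapsed onto the MOVABLE list, so the list a page sits on cannot be trusted at drain time.

/*
 * Toy userspace model of the per-cpu queueing side; not kernel code.
 * Build: cc -o queue queue.c && ./queue
 */
#include <assert.h>

enum {
	MIGRATE_UNMOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_MOVABLE,
	MIGRATE_PCPTYPES,		/* number of types with a pcp list */
	MIGRATE_RESERVE = MIGRATE_PCPTYPES,
};

struct toy_page { unsigned long private; };

/* Toy stand-in for free_hot_cold_page(): records the page's real
 * migratetype, then returns the pcp list the page is queued on. */
static int queue_on_pcp(struct toy_page *page, int migratetype)
{
	page->private = migratetype;		/* remembered for the drain */
	if (migratetype >= MIGRATE_PCPTYPES)	/* no pcp list of its own */
		migratetype = MIGRATE_MOVABLE;
	return migratetype;
}

int main(void)
{
	struct toy_page page;

	/* The page sits on the MOVABLE list, yet its private field
	 * still says RESERVE -- exactly what the drain must honour. */
	assert(queue_on_pcp(&page, MIGRATE_RESERVE) == MIGRATE_MOVABLE);
	assert(page.private == MIGRATE_RESERVE);
	return 0;
}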