author    Fengguang Wu <fengguang.wu@intel.com>    2012-12-18 14:23:28 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>    2012-12-18 15:02:15 -0800
commit    d37dd5dcb955dd8c2cdd4eaef1f15d1b7ecbc379 (patch)
tree      604209843c03b53a6c74f90cdb7d000914213066 /mm
parent    dc053733ea44babedb20266300b984d6add8b9e5 (diff)
vmscan: comment too_many_isolated()
Comment "Why it's doing so" rather than "What it does" as proposed by Andrew Morton. Signed-off-by: Wu Fengguang <fengguang.wu@intel.com> Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Reviewed-by: Minchan Kim <minchan.kim@gmail.com> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Mel Gorman <mel@csn.ul.ie> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--    mm/vmscan.c    6
1 file changed, 5 insertions, 1 deletion
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7f3096137b8a..e73d0206dddd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1177,7 +1177,11 @@ int isolate_lru_page(struct page *page)
 }
 
 /*
- * Are there way too many processes in the direct reclaim path already?
+ * A direct reclaimer may isolate SWAP_CLUSTER_MAX pages from the LRU list
+ * and then get rescheduled. When there are a massive number of tasks doing
+ * page allocation, such sleeping direct reclaimers may keep piling up on
+ * each CPU; the LRU list then shrinks and is scanned faster than necessary,
+ * leading to unnecessary swapping, thrashing and OOM.
  */
 static int too_many_isolated(struct zone *zone, int file,
 				struct scan_control *sc)
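
For context, the check that too_many_isolated() performs is roughly the sketch below. It is paraphrased from the mm/vmscan.c of this era rather than taken from this patch, so helper names such as global_reclaim() and the exact per-zone counters are assumptions that may differ between kernel releases. The idea is that kswapd is never throttled, and a direct reclaimer is asked to wait once the pages it and its peers have isolated outnumber the inactive pages still left on the corresponding LRU.

/* Sketch of the ~3.8-era logic; not part of this commit's diff. */
static int too_many_isolated(struct zone *zone, int file,
			     struct scan_control *sc)
{
	unsigned long inactive, isolated;

	/* kswapd does the background reclaim work; never throttle it. */
	if (current_is_kswapd())
		return 0;

	/* Memcg-limited reclaim isolates few pages; no need to throttle. */
	if (!global_reclaim(sc))
		return 0;

	/* Compare isolated pages against what remains on the inactive LRU. */
	if (file) {
		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
	} else {
		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
	}

	/* Too many reclaimers already sleeping with pages isolated? */
	return isolated > inactive;
}

The caller, shrink_inactive_list(), loops on this test and sleeps in congestion_wait() while it returns true, which is what keeps the piled-up direct reclaimers described in the new comment from shrinking the LRU list even further.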