author    Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>   2010-09-08 10:19:38 +0900
committer Andi Kleen <ak@linux.intel.com>               2010-10-08 09:32:45 +0200
commit    8c6c2ecb44667f7204e9d2b89c4c1f42edc5a196 (patch)
tree      82dcfddae57bc34c04be1b044b363534082d8ada /mm/hugetlb.c
parent    a9869b837c098732bad84939015c0eb391b23e41 (diff)
HWPOISON, hugetlb: recover from free hugepage error when !MF_COUNT_INCREASED
Currently, error recovery for a free hugepage works only in the MF_COUNT_INCREASED case. This patch enables the !MF_COUNT_INCREASED case as well. Free hugepages can be handled directly by alloc_huge_page() and dequeue_hwpoisoned_huge_page(); both are protected by hugetlb_lock, so there is no race between them.

Note that this patch defines the refcount of an HWPoisoned hugepage dequeued from the freelist to be 1, instead of the previous 0, so that a race between unpoison and memory failure on a free hugepage is avoided. This is reasonable because, unlike free buddy pages, a free hugepage remains governed by hugetlbfs even after error handling finishes. It also makes the unpoison code added in a later patch cleaner.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
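For illustration only, a minimal sketch of how a hwpoison caller might use the freelist path this patch enables. handle_free_hugepage_error() is a hypothetical name invented for this sketch, not code from the patch or from mm/memory-failure.c; the real entry point and error handling differ.

/*
 * Illustrative sketch, not part of this patch: in the !MF_COUNT_INCREASED
 * case no extra reference has been taken on the page, so the handler can
 * try to pull a still-free hugepage off the hugetlb freelist directly.
 * dequeue_hwpoisoned_huge_page() does the removal under hugetlb_lock and,
 * after this patch, leaves the page with refcount 1 (set_page_refcounted()
 * in the diff below), so a later unpoison sees a consistent count.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/hugetlb.h>

static int handle_free_hugepage_error(struct page *hpage, int flags)
{
	if (!(flags & MF_COUNT_INCREASED)) {
		/* Caller did not pin the page: try the freelist path. */
		if (dequeue_hwpoisoned_huge_page(hpage) == 0)
			return 0;	/* dequeued; page now holds refcount 1 */
	}
	/* Page was already pinned or is in use: not handled by this sketch. */
	return -EBUSY;
}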
Diffstat (limited to 'mm/hugetlb.c')
-rw-r--r--  mm/hugetlb.c | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 636be5d6aadd..7123270bfb38 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2974,6 +2974,7 @@ int dequeue_hwpoisoned_huge_page(struct page *hpage)
 	spin_lock(&hugetlb_lock);
 	if (is_hugepage_on_freelist(hpage)) {
 		list_del(&hpage->lru);
+		set_page_refcounted(hpage);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 		ret = 0;