author    Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>	2020-07-17 00:12:15 +0900
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2020-07-29 10:18:41 +0200
commit    93f1e16af4a5135d6d4a6592ff14f43f4b0e5e2f (patch)
tree      d44ed2e00fc8bb933a7556bbb0c6a05393959540 /drivers/android
parent    35728cac176a1cc1c57aa87a1e55e95a0ee3e985 (diff)
binder: Don't use mmput() from shrinker function.
commit f867c771f98891841c217fa8459244ed0dd28921 upstream.

syzbot is reporting that mmput() from a shrinker function has a risk of
deadlock [1], because delayed_uprobe_add() from update_ref_ctr() calls
kzalloc(GFP_KERNEL) with delayed_uprobe_lock held, and
uprobe_clear_state() from __mmput() also takes delayed_uprobe_lock.

Commit a1b2289cef92ef0e ("android: binder: drop lru lock in isolate
callback") replaced mmput() with mmput_async() in order to avoid
sleeping with a spinlock held. This patch likewise replaces mmput()
with mmput_async(), but this time in order not to start __mmput()
from shrinker context at all.

[1] https://syzkaller.appspot.com/bug?id=bc9e7303f537c41b2b0cc2dfcea3fc42964c2d45

Reported-by: syzbot <syzbot+1068f09c44d151250c33@syzkaller.appspotmail.com>
Reported-by: syzbot <syzbot+e5344baa319c9a96edec@syzkaller.appspotmail.com>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reviewed-by: Michal Hocko <mhocko@suse.com>
Acked-by: Todd Kjos <tkjos@google.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/4ba9adb2-43f5-2de0-22de-f6075c1fab50@i-love.sakura.ne.jp
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
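For context, mmput_async() sidesteps the problem by deferring the final
teardown to a workqueue: the last reference is still dropped, but
__mmput() never executes in the caller's (here: shrinker/reclaim)
context. Below is a simplified sketch of that helper, paraphrased from
kernel/fork.c; exact details vary by kernel version:

	static void mmput_async_fn(struct work_struct *work)
	{
		struct mm_struct *mm = container_of(work, struct mm_struct,
						    async_put_work);

		/* Runs from a workqueue, never from shrinker/reclaim context. */
		__mmput(mm);
	}

	void mmput_async(struct mm_struct *mm)
	{
		/* Drop one user reference; only the last drop triggers teardown. */
		if (atomic_dec_and_test(&mm->mm_users)) {
			INIT_WORK(&mm->async_put_work, mmput_async_fn);
			schedule_work(&mm->async_put_work);
		}
	}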
Diffstat (limited to 'drivers/android')
-rw-r--r--	drivers/android/binder_alloc.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 7067d5542a82..2048ba6c8b08 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -948,7 +948,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 		trace_binder_unmap_user_end(alloc, index);
 	}
 	up_read(&mm->mmap_sem);
-	mmput(mm);
+	mmput_async(mm);
 
 	trace_binder_unmap_kernel_start(alloc, index);
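With this one-line change, binder_alloc_free_page() only ever drops its
mm reference; if that happens to be the last reference, the teardown
path (__mmput(), including uprobe_clear_state() and its
delayed_uprobe_lock acquisition) runs later from a workqueue instead of
from reclaim context, which breaks the lock cycle syzbot reported.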