author    | Li Zefan <lizefan@huawei.com>                    | 2014-06-30 11:49:58 +0800
---|---|---
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org>  | 2014-07-17 16:23:18 -0700
commit    | f1a6b5ddc318c0722e70715ef3a53e53981d606c |
tree      | f4bfbbee6da1cf266a03246359292fe3db18553e /kernel |
parent    | 7a072684fbe6d8be0156109ed2954a36d42e53c4 |
cgroup: fix mount failure in a corner case
commit 970317aa48c6ef66cd023c039c2650c897bad927 upstream.
# cat test.sh
#!/bin/bash
mount -t cgroup -o cpu xxx /cgroup
umount /cgroup
mount -t cgroup -o cpu,cpuacct xxx /cgroup
umount /cgroup
# ./test.sh
mount: xxx already mounted or /cgroup busy
mount: according to mtab, xxx is already mounted on /cgroup
This is because the cgroupfs_root of the first mount was still under
asynchronous destruction when the second mount was attempted.
Fix this by delaying and then retrying mount for this case.
v3:
- put the refcnt immediately after getting it. (Tejun)
v2:
- use percpu_ref_tryget_live() rather than introducing
percpu_ref_alive(). (Tejun)
- adjust comment.
tj: Updated the comment a bit.
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
[lizf: Backported to 3.15:
- s/percpu_ref_tryget_live/atomic_inc_not_zero/
- Use goto instead of calling restart_syscall()
- Add cgroup_tree_mutex]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/cgroup.c | 25 |
1 file changed, 25 insertions(+), 0 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index ceee0c54c6a4..17a0a7ea692e 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1484,10 +1484,12 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type,
 			 int flags, const char *unused_dev_name,
 			 void *data)
 {
+	struct cgroup_subsys *ss;
 	struct cgroup_root *root;
 	struct cgroup_sb_opts opts;
 	struct dentry *dentry;
 	int ret;
+	int i;
 	bool new_sb;
 
 	/*
@@ -1514,6 +1516,29 @@ retry:
 		goto out_unlock;
 	}
 
+	/*
+	 * Destruction of cgroup root is asynchronous, so subsystems may
+	 * still be dying after the previous unmount.  Let's drain the
+	 * dying subsystems.  We just need to ensure that the ones
+	 * unmounted previously finish dying and don't care about new ones
+	 * starting.  Testing ref liveliness is good enough.
+	 */
+	for_each_subsys(ss, i) {
+		if (!(opts.subsys_mask & (1 << i)) ||
+		    ss->root == &cgrp_dfl_root)
+			continue;
+
+		if (!atomic_inc_not_zero(&ss->root->cgrp.refcnt)) {
+			mutex_unlock(&cgroup_mutex);
+			mutex_unlock(&cgroup_tree_mutex);
+			msleep(10);
+			mutex_lock(&cgroup_tree_mutex);
+			mutex_lock(&cgroup_mutex);
+			goto retry;
+		}
+		cgroup_put(&ss->root->cgrp);
+	}
+
 	for_each_root(root) {
 		bool name_match = false;