author    Miklos Szeredi <mszeredi@suse.cz>    2009-03-23 16:07:24 +0100
committer Greg Kroah-Hartman <gregkh@suse.de>    2009-05-02 10:24:58 -0700
commit    953e45c45cf3daa1037fac03246e2fabc088ba0b (patch)
tree      d5de6e36e1e5951bb16fb72fe739041c0eaa0cd7 /kernel
parent    9460a617660c1d5f3d6fdf0f6163939a67ed7f9c (diff)
fix ptrace slowness
commit 53da1d9456fe7f87a920a78fdbdcf1225d197cb7 upstream.

This patch fixes bug #12208:

  Bug-Entry : http://bugzilla.kernel.org/show_bug.cgi?id=12208
  Subject   : uml is very slow on 2.6.28 host

This turned out to be not a scheduler regression, but an already
existing problem in ptrace being triggered by subtle scheduler
changes.

The problem is this:

 - task A is ptracing task B
 - task B stops on a trace event
 - task A is woken up and preempts task B
 - task A calls ptrace on task B, which does ptrace_check_attach()
 - this calls wait_task_inactive(), which sees that task B is still on the runq
 - task A goes to sleep for a jiffy
 - ...

Since UML does lots of the above sequences, those jiffies quickly add
up to make it slow as hell.

This patch solves this by not rescheduling in read_unlock() after
ptrace_stop() has woken up the tracer.

Thanks to Oleg Nesterov and Ingo Molnar for the feedback.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
CC: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
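To see why "lots of the above sequences" hurts: a tracer that steps its
child through every syscall triggers the stop/wake cycle once per
syscall, and before this fix each cycle could cost a jiffy in
wait_task_inactive() (at HZ=250 a jiffy is 4 ms, so ten thousand traced
syscalls can add roughly 40 seconds of pure sleeping). The minimal
userspace tracer below is an illustrative sketch, not part of the
patch; it drives exactly that pattern against an arbitrary program.

/* tracer.c - sketch of a syscall-stepping tracer (illustration only).
 * Build: gcc -o tracer tracer.c
 * Run:   ./tracer /bin/true
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
	pid_t child;
	int status;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
		return 1;
	}

	child = fork();
	if (child == 0) {
		/* Tracee (task B): request tracing, then exec; the exec
		 * delivers the first trace stop to the parent. */
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		execv(argv[1], &argv[1]);
		perror("execv");
		_exit(1);
	}

	/* Tracer (task A): each waitpid() wakeup followed by a ptrace()
	 * request is the sequence from the commit message: the tracer
	 * can run before the tracee is off the runqueue, and
	 * ptrace_check_attach() then waits in wait_task_inactive(). */
	waitpid(child, &status, 0);
	while (WIFSTOPPED(status)) {
		if (ptrace(PTRACE_SYSCALL, child, NULL, NULL) == -1) {
			perror("ptrace");
			break;
		}
		waitpid(child, &status, 0);
	}
	return 0;
}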
Diffstat (limited to 'kernel')
-rw-r--r--   kernel/signal.c   8
1 file changed, 8 insertions, 0 deletions
diff --git a/kernel/signal.c b/kernel/signal.c
index 3d161f0025c2..7d0a222936f5 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -1549,7 +1549,15 @@ static void ptrace_stop(int exit_code, int clear_code, siginfo_t *info)
 	read_lock(&tasklist_lock);
 	if (may_ptrace_stop()) {
 		do_notify_parent_cldstop(current, CLD_TRAPPED);
+		/*
+		 * Don't want to allow preemption here, because
+		 * sys_ptrace() needs this task to be inactive.
+		 *
+		 * XXX: implement read_unlock_no_resched().
+		 */
+		preempt_disable();
 		read_unlock(&tasklist_lock);
+		preempt_enable_no_resched();
 		schedule();
 	} else {
 		/*
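The XXX in the new comment names the cleaner long-term shape of this
fix: a dedicated unlock primitive instead of the open-coded three-call
sequence. A hypothetical sketch of such a helper follows;
read_unlock_no_resched() is suggested by the comment but does not
exist in this tree:

/*
 * Hypothetical helper (not implemented in this tree): drop a reader
 * lock without passing through the preemption point that read_unlock()
 * otherwise reaches when the preempt count returns to zero.
 */
static inline void read_unlock_no_resched(rwlock_t *lock)
{
	preempt_disable();
	read_unlock(lock);
	preempt_enable_no_resched();
}

The key design choice is preempt_enable_no_resched(): it drops the
preempt count without checking for a pending reschedule, so the tracer
woken by do_notify_parent_cldstop() cannot preempt the tracee between
the unlock and the schedule() call on the next line. The tracee thus
reaches schedule() and deactivates in one shot, and the tracer's
wait_task_inactive() succeeds immediately instead of retrying a jiffy
later.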