author	Alex Shi <alex.shi@intel.com>	2010-06-17 14:08:13 +0800
committer	Greg Kroah-Hartman <gregkh@suse.de>	2010-08-02 10:20:52 -0700
commit	70ba76a0c43be02e8d7b26a35f7fe6d86c360ccd (patch)
tree	b4cc9d087134160f538a16f146b3d91e37be61a8 /kernel
parent	2d216ac392eea6c6b85d038dc7788bfe44b77040 (diff)
sched: Fix over-scheduling bug
commit 3c93717cfa51316e4dbb471e7c0f9d243359d5f8 upstream.

Commit e70971591 ("sched: Optimize unused cgroup configuration") introduced an imbalanced scheduling bug.

If we do not use CGROUP, the function update_h_load() won't update h_load. When the system has a large number of tasks, far more than the number of logical CPUs, the incorrect cfs_rq[cpu]->h_load value causes load_balance() to pull too many tasks to the local CPU from the busiest CPU, so the role of busiest CPU keeps rotating round-robin among the CPUs. That hurts performance.

The issue was originally found with a scientific-calculation workload developed by Yanmin. With that commit, the workload's performance drops by about 40%.

 CPU   before   after
 00    :  2     :  7
 01    :  1     :  7
 02    : 11     :  6
 03    : 12     :  7
 04    :  6     :  6
 05    : 11     :  7
 06    : 10     :  6
 07    : 12     :  7
 08    : 11     :  6
 09    : 12     :  6
 10    :  1     :  6
 11    :  1     :  6
 12    :  6     :  6
 13    :  2     :  6
 14    :  2     :  6
 15    :  1     :  6

Reviewed-by: Yanmin Zhang <yanmin.zhang@intel.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1276754893.9452.5442.camel@debian>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
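To make the failure mode concrete, here is a minimal userspace C sketch of the effect described above. The function name, field names, and numbers are illustrative assumptions, not the kernel's actual code; it only models how a stale (here, zero) h_load makes each pulled task look nearly free to the balancer, so far too many tasks get moved.

#include <stdio.h>

/* Rough model of the per-task load estimate the balancer charges against
 * the imbalance it wants to move, scaled by the group's hierarchical
 * load (h_load).  Simplified for illustration only. */
static unsigned long task_load_est(unsigned long task_weight,
				   unsigned long h_load,
				   unsigned long cfs_rq_weight)
{
	return task_weight * h_load / (cfs_rq_weight + 1);
}

int main(void)
{
	unsigned long imbalance = 2000;			/* load the balancer wants to move */
	unsigned long task_weight = 1024;		/* weight of one nice-0 task */
	unsigned long cfs_rq_weight = 64 * 1024;	/* 64 runnable nice-0 tasks */

	unsigned long fresh_h_load = cfs_rq_weight;	/* value an h_load update would produce */
	unsigned long stale_h_load = 0;			/* value left behind when the update is skipped */
	unsigned long pulled, moved;

	/* Correct h_load: each task accounts for roughly its real weight,
	 * so a couple of pulls cover the imbalance. */
	for (pulled = 0, moved = 0; moved < imbalance; pulled++)
		moved += task_load_est(task_weight, fresh_h_load, cfs_rq_weight);
	printf("fresh h_load: pulled %lu tasks\n", pulled);

	/* Stale h_load: each task appears to contribute no load, so the
	 * balancer keeps pulling until it runs out of tasks. */
	for (pulled = 0, moved = 0; moved < imbalance && pulled < 64; pulled++)
		moved += task_load_est(task_weight, stale_h_load, cfs_rq_weight);
	printf("stale h_load: pulled %lu tasks (all of them)\n", pulled);

	return 0;
}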
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched.c  |  3 ---
1 file changed, 0 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index e1cfb6c4f644..a0157c72c8f9 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1717,9 +1717,6 @@ static void update_shares_locked(struct rq *rq, struct sched_domain *sd)
 static void update_h_load(long cpu)
 {
-	if (root_task_group_empty())
-		return;
-
 	walk_tg_tree(tg_load_down, tg_nop, (void *)cpu);
 }
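With the early return removed, update_h_load() always lets walk_tg_tree() apply the tg_load_down() visitor, so h_load is recomputed whenever update_h_load() runs, even when no child cgroups exist. The sketch below is an illustrative userspace model of that kind of top-down propagation; the struct layout, field names, and scaling formula are simplifying assumptions, not the kernel's actual data structures.

#include <stdio.h>

struct group {
	struct group *parent;
	struct group *children[4];
	int nr_children;
	unsigned long shares;		/* this group's weight within its parent */
	unsigned long load_weight;	/* weight of entities queued in this group */
	unsigned long h_load;		/* hierarchical load, filled in by the walk */
};

/* Visit a group before its children, in the spirit of the "down" visitor
 * (tg_load_down) that walk_tg_tree() applies in the patch above. */
static void load_down(struct group *g)
{
	int i;

	if (!g->parent)
		g->h_load = g->load_weight;	/* root: the runqueue's raw weight */
	else
		g->h_load = g->parent->h_load * g->shares /
			    (g->parent->load_weight + 1);

	for (i = 0; i < g->nr_children; i++)
		load_down(g->children[i]);
}

int main(void)
{
	struct group root = { .load_weight = 2048 };
	struct group child = { .parent = &root, .shares = 1024, .load_weight = 1024 };

	root.children[0] = &child;
	root.nr_children = 1;

	load_down(&root);
	printf("root h_load=%lu child h_load=%lu\n", root.h_load, child.h_load);
	return 0;
}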