author    Anson Huang <Anson.Huang@nxp.com>    2017-03-02 17:39:12 +0800
committer Jason Liu <jason.hui.liu@nxp.com>   2019-02-12 10:26:17 +0800
commit    e5d617d4c16b32340005a5d400d2cc4177eafe2b
tree      3e2c98a5e11bcb08b4c4b58bf3e73f84685a32c2 /arch/arm/mach-imx
parent    857c81eeba749b886abe6e34717b9f1c90c6b02e
MLK-14308 ARM: imx: fix race condition of multi-cores low power idle on i.mx7d
In i.MX7D low power idle, consider the scenario below, which has a
race condition where the low power idle flow is entered unexpectedly
by a first (non-last) CPU:
CPU#1 enters low power idle:
1. set last_cpu to invalid -1;
2. set cpu1_wfi in low level ASM code;
3. enter WFI;
CPU#0 enters low power idle:
4. set last_cpu to CPU#0;
5. set hardware (DDR, CCM, ANATOP) to low power idle mode;
6. enter WFI;
If, during the window between steps 4 and 6, CPU#1 exits WFI and then
enters low power idle again, the master_lpi condition check will be
true and CPU#1 will also go through steps 4~6 in the low level ASM
code, which is unexpected, since cpu_cluster_pm_enter/exit may only
be called once, by the last CPU in the cluster.
To avoid this race condition, check last_cpu in addition to
master_lpi: if last_cpu already holds a valid value, any other CPU
entering low power idle is treated as a first CPU. Also, move the
resetting of last_cpu to its invalid value into the last CPU's low
power idle exit path.
Signed-off-by: Anson Huang <Anson.Huang@nxp.com>
Diffstat (limited to 'arch/arm/mach-imx')
-rw-r--r--  arch/arm/mach-imx/cpuidle-imx7d.c | 14
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/arm/mach-imx/cpuidle-imx7d.c b/arch/arm/mach-imx/cpuidle-imx7d.c
index 83d1f48ed0d5..34d3bd4cdd64 100644
--- a/arch/arm/mach-imx/cpuidle-imx7d.c
+++ b/arch/arm/mach-imx/cpuidle-imx7d.c
@@ -105,13 +105,8 @@ static int imx7d_enter_low_power_idle(struct cpuidle_device *dev,
 	} else {
 		imx_gpcv2_set_lpm_mode(WAIT_UNCLOCKED);
 		cpu_pm_enter();
-
-		if (atomic_inc_return(&master_lpi) < num_online_cpus()) {
-			imx_set_cpu_jump(dev->cpu, ca7_cpu_resume);
-			/* initialize the last cpu id to invalid here */
-			cpuidle_pm_info->last_cpu = -1;
-			cpu_suspend(0, imx7d_idle_finish);
-		} else {
+		if (atomic_inc_return(&master_lpi) == num_online_cpus() &&
+			cpuidle_pm_info->last_cpu == -1) {
 			imx_gpcv2_set_cpu_power_gate_in_idle(true);
 			cpu_cluster_pm_enter();
@@ -120,6 +115,11 @@ static int imx7d_enter_low_power_idle(struct cpuidle_device *dev,
 			cpu_cluster_pm_exit();
 			imx_gpcv2_set_cpu_power_gate_in_idle(false);
+			/* initialize the last cpu id to invalid here */
+			cpuidle_pm_info->last_cpu = -1;
+		} else {
+			imx_set_cpu_jump(dev->cpu, ca7_cpu_resume);
+			cpu_suspend(0, imx7d_idle_finish);
 		}
 		atomic_dec(&master_lpi);