path: root/kernel
Age    Commit message    Author
2011-11-04  Merge branch 'imx_2.6.38' into imx_2.6.38_android  (Zhang Jiejing)

Conflicts:
    arch/arm/mach-mx6/board-mx6q_sabreauto.c
    arch/arm/mach-mx6/cpu_op-mx6.c
    arch/arm/plat-mxc/dvfs_core.c
    drivers/cpufreq/cpufreq_stats.c
2011-11-03  smp_call_function_many: handle concurrent clearing of mask  (Milton Miller)

commit 723aae25d5cdb09962901d36d526b44d4be1051c upstream.

Mike Galbraith reported finding a lockup ("perma-spin bug") where the cpumask passed to smp_call_function_many was cleared by other cpu(s) while a cpu was preparing its call_data block, resulting in no cpu to clear the last ref and unlock the block.

Having cpus clear their bit asynchronously could be useful on a mask of cpus that might have a translation context, or cpus that need a push to complete an rcu window.

Instead of adding a BUG_ON and requiring yet another cpumask copy, just detect the race and handle it.

Note: arch_send_call_function_ipi_mask must still handle an empty cpumask because the data block is globally visible before that arch callback is made. And (obviously) there are no guarantees about which cpus are notified if the mask is changed during the call; only cpus that were online and had their mask bit set during the whole call are guaranteed to be called.

Reported-by: Mike Galbraith <efault@gmx.de>
Reported-by: Jan Beulich <JBeulich@novell.com>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit d1342fed517f89ad4c3d956d9e214d72c06bb7c1)
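
In outline, the detection amounts to counting what is left of the caller's mask after snapshotting it; a simplified C sketch follows (illustrative only, not the committed diff -- 'data', 'mask', 'refs' and csd_unlock() follow the kernel/smp.c naming):

    /* Illustrative sketch, not the actual patch. */
    cpumask_copy(data->cpumask, mask);
    cpumask_clear_cpu(smp_processor_id(), data->cpumask);
    refs = cpumask_weight(data->cpumask);

    /* Other cpus may have cleared their bits before we snapshotted them. */
    if (unlikely(!refs)) {
        csd_unlock(&data->csd);    /* nothing left to signal, bail out */
        return;
    }
    atomic_set(&data->refs, refs);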
2011-11-03  call_function_many: add missing ordering  (Milton Miller)

commit 45a5791920ae643eafc02e2eedef1a58e341b736 upstream.

Paul McKenney's review pointed out two problems with the barriers in the 2.6.38 update to the smp call function many code.

First, a barrier that would force the func and info members of data to be visible before their consumption in the interrupt handler was missing. This can be solved by adding a smp_wmb between setting the func and info members and setting the cpumask; this will pair with the existing and required smp_rmb ordering the cpumask read before the read of refs. This placement avoids the need for a second smp_rmb in the interrupt handler, which would be executed on each of the N cpus executing the call request. (I was thinking this barrier was present but it was not.)

Second, the previous write to refs (establishing the zero that the interrupt handler was testing from all cpus) was performed by a third party cpu. This would invoke transitivity which, as a recent or concurrent addition to memory-barriers.txt now explicitly states, would require a full smp_mb(). However, we know the cpumask will only be set by one cpu (the data owner) and any previous iteration of the mask would have been cleared by the reading cpu. By redundantly writing refs to 0 on the owning cpu before the smp_wmb, the write to refs will follow the same path as the writes that set the cpumask, which in turn allows us to keep the barrier in the interrupt handler a smp_rmb instead of promoting it to a smp_mb (which would be executed by N cpus for each of the possible M elements on the list).

I moved and expanded the comment about our (ab)use of the rcu list primitives for the concurrent walk earlier into this function. I considered moving the first two paragraphs to the queue list head and lock, but felt it would have been too disconnected from the code.

Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit 4718af584c9db2b2002a89b856462cedaaf3d7da)
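
The resulting pairing is the usual publish/consume pattern; roughly, as an illustrative sketch (not the committed diff):

    /* Owner cpu: make func/info (and the redundant refs = 0) visible
     * before the cpumask bits that readers treat as "data is ready". */
    data->csd.func = func;
    data->csd.info = info;
    atomic_set(&data->refs, 0);        /* written by the owner, see above */
    smp_wmb();                         /* pairs with the smp_rmb() below */
    cpumask_copy(data->cpumask, mask);

    /* Interrupt handler, inside the rcu list walk: the cpumask read is
     * ordered before the refs read, so a set bit guarantees func/info
     * are already visible. */
    if (!cpumask_test_cpu(smp_processor_id(), data->cpumask))
        continue;
    smp_rmb();
    if (!atomic_read(&data->refs))
        continue;
    data->csd.func(data->csd.info);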
2011-11-03  call_function_many: fix list delete vs add race  (Milton Miller)

commit e6cd1e07a185d5f9b0aa75e020df02d3c1c44940 upstream.

Peter pointed out there was nothing preventing the list_del_rcu in smp_call_function_interrupt from running before the list_add_rcu in smp_call_function_many. Fix this by not setting refs until we have gotten the lock for the list. Take advantage of the wmb in list_add_rcu to save an explicit additional one.

I tried to force this race with a udelay before the lock & list_add and by mixing all 64 online cpus with just 3 random cpus in the mask, but was unsuccessful. Still, inspection shows a valid race, and the fix is an extension of the existing protection window in the current code.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
(cherry picked from commit cb8385e61fb736ef6748d305d868b28a9f649ef1)
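
Sketched in C (illustrative, not the diff itself), the ordering the fix relies on looks like this: refs stays zero -- so the interrupt handler ignores the element -- until after the element has been (re)added under the list lock, and the barrier inside list_add_rcu() orders the earlier cpumask stores before the refs store:

    raw_spin_lock_irqsave(&call_function.lock, flags);
    /* Re-add the per-cpu element at the head of the global queue. */
    list_add_rcu(&data->csd.list, &call_function.queue);
    /*
     * The wmb inside list_add_rcu() orders the cpumask writes before
     * this store to refs; refs != 0 is what tells the interrupt
     * handler the element is live, so its list_del_rcu can no longer
     * run ahead of this add.
     */
    atomic_set(&data->refs, refs);
    raw_spin_unlock_irqrestore(&call_function.lock, flags);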
2011-10-11  Merge remote branch 'fsl-linux-sdk/imx_2.6.38' into imx_2.6.38_android  (Xinyu Chen)
2011-09-29  ENGR00156637 [MX6]Reboot take long time on SMP  (Anson Huang)

Add a workaround for the SMP reboot issue: with SMP, all the CPUs need to do _rcu_barrier, and if we enqueue an rcu callback we need to keep the CPU tick alive until we take care of it by completing the appropriate grace period. This workaround only runs when the reboot command is issued, so it does not impact normal kernel operation.

Signed-off-by: Anson Huang <b20788@freescale.com>
2011-09-16  ENGR00156905 mx6q: add android msl part, pmem, etc  (Zhang Jiejing)

Android: fix the Android build error, update imx6_android_defconfig, create an android.h for Android devices, and add the mx6q fixup for pmem.

Signed-off-by: Zhang Jiejing <jiejing.zhang@freescale.com>
2011-09-15  Merge remote branch 'fsl-linux-sdk/imx_2.6.38' into 2.6.38-merge-test  (Zhang Jiejing)

Conflicts:
    drivers/misc/Kconfig
    drivers/misc/Makefile
    drivers/net/wireless/Makefile
    include/linux/mmc/mmc.h
    kernel/power/main.c
2011-08-04  futex: Sanitize cmpxchg_futex_value_locked API  (Michel Lespinasse)

The cmpxchg_futex_value_locked API was funny in that it returned either the original, user-exposed futex value OR an error code such as -EFAULT. This was confusing at best, and could be a source of livelocks in places that retry the cmpxchg_futex_value_locked after trying to fix the issue by running fault_in_user_writeable().

This change makes the cmpxchg_futex_value_locked API more similar to the get_futex_value_locked one, returning an error code and updating the original value through a reference argument.

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com> [tile]
Acked-by: Tony Luck <tony.luck@intel.com> [ia64]
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michal Simek <monstr@monstr.eu> [microblaze]
Acked-by: David Howells <dhowells@redhat.com> [frv]
Cc: Darren Hart <darren@dvhart.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <20110311024851.GC26122@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
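
After the change the call reads like get_futex_value_locked(): a 0/-EFAULT return, with the old futex word coming back through a pointer. A sketch of a caller, simplified from the retry loops in kernel/futex.c (uaddr/uval/newval are the usual futex locals; the labels are placeholders):

    u32 curval;
    int ret;

    ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
    if (unlikely(ret))
        goto handle_fault;     /* -EFAULT: fault the page in and retry */
    if (curval != uval)
        goto retry;            /* futex word changed under us */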
2011-04-01  ENGR00133978 PM: add time sensitive debug function to suspend & resume  (Zhang Jiejing)

Some drivers are slow on suspend/resume, but embedded systems like eReaders and cell phones are time sensitive. This commit reports drivers that are slow on suspend/resume; the default threshold value is 500 us (0.5 ms).

The threshold can be changed by writing to '/sys/power/device_suspend_time_threshold'; it is in microseconds.

The output looks like:

PM: device platform:soc-audio.2 suspend too slow, takes 606.696 msecs
PM: device platform:mxc_sdc_fb.1 suspend too slow, takes 7.708 msecs

This debug output is off by default. To debug suspend/resume times, echo a time in microseconds to /sys/power/device_suspend_time_threshold. For example, to find drivers whose suspend & resume takes more than 0.5 ms (500 us):

echo 500 > /sys/power/device_suspend_time_threshold

Signed-off-by: Zhang Jiejing <jiejing.zhang@freescale.com>
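
The measurement itself can be pictured as a timestamp pair around each device callback; a minimal sketch (illustrative -- only the sysfs knob name comes from this commit, the surrounding dpm code is abbreviated):

    ktime_t starttime = ktime_get();

    error = dev->bus->pm->suspend(dev);        /* the driver's callback */

    if (device_suspend_time_threshold) {
        s64 usecs = ktime_to_us(ktime_sub(ktime_get(), starttime));

        if (usecs > device_suspend_time_threshold)
            pr_info("PM: device %s suspend too slow, takes %lld.%03lld msecs\n",
                    dev_name(dev), usecs / 1000, usecs % 1000);
    }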
2011-03-29  Print pending wakeup IRQ preventing suspend to dmesg  (Todd Poynor)
Change-Id: I36f90735c75fb7c7ab1084775ec0d0ab02336e6e Signed-off-by: Todd Poynor <toddpoynor@google.com>
2011-03-29  cgroup: leave cg_list valid upon cgroup_exit  (Simon Wilson)
A thread/process in cgroup_attach_task() could have called list_del(&tsk->cg_list) after cgroup_exit() had already called list_del() on the same list. Since it only checked for !list_empty(&tsk->cg_list) before doing this, the list_del() call would thus be made twice. The solution is to leave tsk->cg_list in a valid state in cgroup_exit() with list_del_init(&tsk->cg_list), which leaves an empty list. Change-Id: I4e7c1d0665fced629f5ca033c18dd98afe080e0c Signed-off-by: Simon Wilson <simonwilson@google.com>
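
The whole fix hinges on the difference between the two delete primitives: list_del() leaves the node poisoned (so a later list_empty() test still looks non-empty), while list_del_init() leaves a valid empty node. Schematically (locking omitted):

    /* cgroup_exit(), sketched: leave tsk->cg_list re-checkable */
    if (!list_empty(&tsk->cg_list))
        list_del_init(&tsk->cg_list);    /* empty, not poisoned */

    /* cgroup_attach_task(), later: this guard now really protects us */
    if (!list_empty(&tsk->cg_list))
        list_del(&tsk->cg_list);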
2011-03-29  cgroup: Remove call to synchronize_rcu in cgroup_attach_task  (Colin Cross)

synchronize_rcu can be very expensive, averaging 100 ms in some cases. In cgroup_attach_task, it is used to prevent a task->cgroups pointer dereferenced in an RCU read side critical section from being invalidated, by delaying the call to put_css_set until after an RCU grace period.

To avoid the call to synchronize_rcu, make the put_css_set call rcu-safe by moving the deletion of the css_set links into free_css_set_work, scheduled by the rcu callback free_css_set_rcu.

The decrement of the cgroup refcount is no longer synchronous with the call to put_css_set, which can result in the cgroup refcount staying positive after the last call to cgroup_attach_task returns. To allow the cgroup to be deleted with cgroup_rmdir synchronously after cgroup_attach_task, have rmdir check the refcount of all associated css_sets. If cgroup_rmdir is called on a cgroup for which the css_sets all have refcount zero but the cgroup refcount is nonzero, reuse the rmdir waitqueue to block the rmdir until free_css_set_work is called.

Signed-off-by: Colin Cross <ccross@android.com>
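
The deferral pattern described above, as a rough sketch (the function names follow the commit text; the 'work' member and the omitted locking/link teardown details are assumptions of this sketch):

    static void free_css_set_work(struct work_struct *work)
    {
        struct css_set *cg = container_of(work, struct css_set, work);

        /* drop the cgroup links and refcounts here, outside of RCU */
        kfree(cg);
    }

    static void free_css_set_rcu(struct rcu_head *head)
    {
        struct css_set *cg = container_of(head, struct css_set, rcu_head);

        INIT_WORK(&cg->work, free_css_set_work);
        schedule_work(&cg->work);
    }

    /* put_css_set(), sketched: no synchronize_rcu() on this path */
    static void put_css_set(struct css_set *cg)
    {
        if (atomic_dec_and_test(&cg->refcount))
            call_rcu(&cg->rcu_head, free_css_set_rcu);
    }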
2011-03-29  cgroup: Set CGRP_RELEASABLE when adding to a cgroup  (Colin Cross)

Changes the meaning of CGRP_RELEASABLE to be set on any cgroup that has ever had a task or cgroup in it, or had css_get called on it. The bit is set in cgroup_attach_task, cgroup_create, and __css_get. It is not necessary to set the bit in cgroup_fork, as the task is either in the root cgroup, which can never be released, or the task it was forked from already set the bit in cgroup_attach_task.

Signed-off-by: Colin Cross <ccross@android.com>
2011-03-29  sched: use the old min_vruntime when normalizing on dequeue  (Dima Zavin)
After pulling the thread off the run-queue during a cgroup change, the cfs_rq.min_vruntime gets recalculated. The dequeued thread's vruntime then gets normalized to this new value. This can then lead to the thread getting an unfair boost in the new group if the vruntime of the next task in the old run-queue was way further ahead. Cc: Arve Hjønnevåg <arve@android.com> Signed-off-by: Dima Zavin <dima@android.com>
2011-03-29  power: wakelock: call __get_wall_to_monotonic() instead of using wall_to_monotonic  (Erik Gilling)

Change-Id: I9e9c3b923bf9a22ffd48f80a72050289496e57d8
2011-03-29  wakelock: Fix operator precedence bug  (Colin Cross)
Change-Id: I21366ace371d1b8f4684ddbe4ea8d555a926ac21 Signed-off-by: Colin Cross <ccross@google.com>
2011-03-29  scheduler: cpuacct: Enable platform callbacks for cpuacct power tracking  (Mike Chan)

Platforms must register a cpu power function that returns power in milliWatt seconds.

Change-Id: I1caa0335e316c352eee3b1ddf326fcd4942bcbe8
Signed-off-by: Mike Chan <mike@android.com>
2011-03-29  scheduler: cpuacct: Enable platform hooks to track cpuusage for CPU frequencies  (Mike Chan)

Introduce new platform callback hooks for cpuacct for tracking CPU frequencies.

Not all platforms / architectures have a CPU_FREQ_TABLE defined for CPU transition speeds. In order to track time spent at various CPU frequencies, we enable platform callbacks from cpuacct for this accounting. Architectures that support overclock boosting, or that don't have pre-defined frequency tables, can implement their own bucketing system that makes sense given their cpufreq scaling abilities.

New file: cpuacct.cpufreq reports the CPU time (in nanoseconds) spent at each CPU frequency.

Change-Id: I10a80b3162e6fff3a8a2f74dd6bb37e88b12ba96
Signed-off-by: Mike Chan <mike@android.com>
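
The hook shape is platform-defined; purely as a hypothetical illustration (none of these names come from the patch), a platform might register something along these lines:

    /* Hypothetical illustration only -- names invented for this sketch. */
    struct cpuacct_freq_ops {
        /* account 'cputime' ns against the cpu's current frequency bucket */
        void (*charge)(struct task_struct *tsk, u64 cputime, unsigned int cpu);
        /* dump per-bucket totals for the cpuacct.cpufreq file */
        int  (*show)(struct seq_file *m, void *v);
    };

    int cpuacct_register_freq_ops(struct cpuacct_freq_ops *ops);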
2011-03-29  sched: Add a generic notifier when a task struct is about to be freed  (San Mehat)

This patch adds a notifier which can be used by subsystems that may be interested in when a task has completely died and is about to have its last resource freed. The Android lowmemory killer uses this to determine when a task it has killed has finally given up its goods.

Signed-off-by: San Mehat <san@google.com>
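
Wiring such a notifier up is the standard notifier-chain pattern; a hedged sketch (the helper names here are assumptions, not necessarily those of the patch):

    /* Sketch only -- helper names are assumptions. */
    static ATOMIC_NOTIFIER_HEAD(task_free_notifier);

    int task_free_register(struct notifier_block *n)
    {
        return atomic_notifier_chain_register(&task_free_notifier, n);
    }

    int task_free_unregister(struct notifier_block *n)
    {
        return atomic_notifier_chain_unregister(&task_free_notifier, n);
    }

    /*
     * In __put_task_struct(), just before the task's memory is freed:
     *
     *     atomic_notifier_call_chain(&task_free_notifier, 0, tsk);
     */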
2011-03-29  power: wakelock: Print active wakelocks when has_wake_lock() is called  (Mike Chan)
When DEBUG_SUSPEND is enabled print active wakelocks when we check if there are any active wakelocks. In print_active_locks(), print expired wakelocks if DEBUG_EXPIRE is enabled Change-Id: Ib1cb795555e71ff23143a2bac7c8a58cbce16547 Signed-off-by: Mike Chan <mike@android.com>
2011-03-29  kernel: printk: Add non exported function for clearing the log ring buffer  (San Mehat)
Signed-off-by: San Mehat <san@google.com>
2011-03-29  printk: Fix log_buf_copy termination.  (Arve Hjønnevåg)
If idx was non-zero and the log had wrapped, len did not get truncated to stop at the last byte written to the log.
2011-03-29  Revert "printk: remove unused code from kernel/printk.c"  (Arve Hjønnevåg)
This reverts commit acff181d3574244e651913df77332e897b88bff4.
2011-03-29  Fix cgroup: Add generic cgroup subsystem permission checks.  (Colin Cross)
Change-Id: Ib82f6a716686a3ebb4592112400fc5e4a2ce066c Signed-off-by: Colin Cross <ccross@android.com>
2011-03-29  cgroup: Add generic cgroup subsystem permission checks.  (San Mehat)

Rather than using explicit euid == 0 checks when trying to move tasks into a cgroup via CFS, move permission checks into each specific cgroup subsystem. If a subsystem does not specify a 'can_attach' handler, then we fall back to doing our checks the old way. This way non-root processes can add arbitrary processes to a cgroup if all the registered subsystems on that cgroup agree.

Also change the explicit euid == 0 check to CAP_SYS_ADMIN.

Signed-off-by: San Mehat <san@google.com>
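
What a per-subsystem check might look like, sketched (illustrative; the can_attach signature is trimmed and the policy shown is just an example, not the patch's):

    static int example_can_attach(struct cgroup_subsys *ss,
                                  struct cgroup *cgrp,
                                  struct task_struct *tsk)
    {
        /* allow privileged movers ... */
        if (capable(CAP_SYS_ADMIN))
            return 0;
        /* ... or a mover with the same effective uid as the target */
        if (current_cred()->euid == task_cred_xxx(tsk, euid))
            return 0;
        return -EACCES;
    }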
2011-03-29  PM: earlysuspend: Removing dependence on console.  (Rebecca Schultz)

Rather than signaling a full update of the display from userspace via a console switch, this patch introduces 2 files in /sys/power: wait_for_fb_sleep and wait_for_fb_wake. Reading these files will block until the requested state has been entered. When a read from wait_for_fb_sleep returns, userspace should stop drawing. When wait_for_fb_wake returns, it should do a full update. If either is read when the fb driver is already in the requested state, it will return immediately.

Signed-off-by: Rebecca Schultz <rschultz@google.com>
Signed-off-by: Arve Hjønnevåg <arve@android.com>
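
From userspace the two files behave like blocking reads; a minimal consumer loop might look like this (illustrative userspace C; only the two file names come from the commit text):

    #include <fcntl.h>
    #include <unistd.h>

    static void wait_for(const char *path)
    {
        char buf[16];
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return;
        read(fd, buf, sizeof(buf));     /* blocks until the state is entered */
        close(fd);
    }

    int main(void)
    {
        for (;;) {
            wait_for("/sys/power/wait_for_fb_sleep");   /* stop drawing */
            wait_for("/sys/power/wait_for_fb_wake");    /* do a full update */
        }
    }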
2011-03-29  consoleearlysuspend: Fix for 2.6.32  (Arve Hjønnevåg)
vt_waitactive now needs a 1 based console number Change-Id: I07ab9a3773c93d67c09d928c8d5494ce823ffa2e
2011-03-29  PM: earlysuspend: Add console switch when user requested sleep state changes.  (Arve Hjønnevåg)
Signed-off-by: Arve Hjønnevåg <arve@android.com>
2011-03-29  PM: wakelock: Don't dump unfrozen task list when aborting try_to_freeze_tasks after less than one second  (Arve Hjønnevåg)

Change-Id: Ib2976e5b97a5ee4ec9abd4d4443584d9257d0941
Signed-off-by: Arve Hjønnevåg <arve@android.com>
2011-03-29  PM: wakelock: Abort task freezing if a wake lock is held.  (Arve Hjønnevåg)
Avoids a problem where the device sometimes hangs for 20 seconds before the screen is turned on.
2011-03-29  PM: Add user-space wake lock api.  (Arve Hjønnevåg)

This adds /sys/power/wake_lock and /sys/power/wake_unlock. Writing a string to wake_lock creates a wake lock the first time it sees that string and locks it. Optionally, the string can be followed by a timeout. To unlock the wake lock, write the same string to wake_unlock.

Change-Id: I66c6e3fe6487d17f9c2fafde1174042e57d15cd7
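
A minimal userspace illustration of that interface (the paths and the name-plus-optional-timeout format are from the commit text; the lock name and the dropped error handling are just for the sketch):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    static void write_str(const char *path, const char *s)
    {
        int fd = open(path, O_WRONLY);

        if (fd < 0)
            return;
        write(fd, s, strlen(s));
        close(fd);
    }

    int main(void)
    {
        /* take (and create, on first use) a wake lock named "mylock";
         * an optional timeout may follow the name */
        write_str("/sys/power/wake_lock", "mylock");

        /* ... work that must finish before the device may suspend ... */

        write_str("/sys/power/wake_unlock", "mylock");
        return 0;
    }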
2011-03-29  PM: Enable early suspend through /sys/power/state  (Arve Hjønnevåg)

If EARLYSUSPEND is enabled, writes to /sys/power/state no longer block, and the kernel will try to enter the requested state every time no wakelocks are held. Write "on" to resume normal operation.
2011-03-29  PM: Implement early suspend api  (Arve Hjønnevåg)
2011-03-29  PM: wakelocks: Use seq_file for /proc/wakelocks so we can get more than 3K of stats.  (Arve Hjønnevåg)

Change-Id: I42ed8bea639684f7a8a95b2057516764075c6b01
Signed-off-by: Arve Hjønnevåg <arve@android.com>
2011-03-29  power: wakelocks: fix buffer overflow in print_wake_locks  (Erik Gilling)
Change-Id: Ic944e3b3d3bc53eddc6fd0963565fd072cac373c Signed-off-by: Erik Gilling <konkers@android.com>
2011-03-29  power: Prevent spinlock recursion when wake_unlock() is called  (Mike Chan)
Signed-off-by: Mike Chan <mike@android.com>
2011-03-29  PM: Implement wakelock api.  (Arve Hjønnevåg)

PM: wakelock: Replace expire work with a timer

The expire work function did not work in the normal case.

Signed-off-by: Arve Hjønnevåg <arve@android.com>
2011-03-29  sched: Enable might_sleep before initializing drivers.  (Arve Hjønnevåg)
This allows detection of init bugs in built-in drivers. Signed-off-by: Arve Hjønnevåg <arve@android.com>
2011-03-29  Add build option to set the default panic timeout.  (Arve Hjønnevåg)
2011-03-29  mm: Add min_free_order_shift tunable.  (Arve Hjønnevåg)

By default the kernel tries to keep half as much memory free at each order as it does for one order below. This can be too aggressive when running without swap.

Change-Id: I5efc1a0b50f41ff3ac71e92d2efd175dedd54ead
Signed-off-by: Arve Hjønnevåg <arve@android.com>
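
Concretely, this turns the per-order halving of the watermark in __zone_watermark_ok() into a tunable shift; roughly (a sketch of that loop, not the exact diff):

    for (o = 0; o < order; o++) {
        /* pages of this order are unusable for a larger allocation */
        free_pages -= z->free_area[o].nr_free << o;

        /* mainline: min >>= 1 (require half as much free at the next
         * order); with the tunable the reduction becomes configurable */
        min >>= min_free_order_shift;

        if (free_pages <= min)
            return false;
    }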
2011-03-29  ARM: Make low-level printk work  (Tony Lindgren)
Makes low-level printk work. Signed-off-by: Tony Lindgren <tony@atomide.com>
2011-03-14  Merge branch 'bugfixes' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6  (Linus Torvalds)

* 'bugfixes' of git://git.linux-nfs.org/projects/trondmy/nfs-2.6:
  NFS: NFSROOT should default to "proto=udp"
  nfs4: remove duplicated #include
  NFSv4: nfs4_state_mark_reclaim_nograce() should be static
  NFSv4: Fix the setlk error handler
  NFSv4.1: Fix the handling of the SEQUENCE status bits
  NFSv4/4.1: Fix nfs4_schedule_state_recovery abuses
  NFSv4.1 reclaim complete must wait for completion
  NFSv4: remove duplicate clientid in struct nfs_client
  NFSv4.1: Retry CREATE_SESSION on NFS4ERR_DELAY
  sunrpc: Propagate errors from xs_bind() through xs_create_sock() (try3-resend)
  Fix nfs_compat_user_ino64 so it doesn't cause problems if bit 31 or 63 are set in fileid
  nfs: fix compilation warning
  nfs: add kmalloc return value check in decode_and_add_ds
  SUNRPC: Remove resource leak in svc_rdma_send_error()
  nfs: close NFSv4 COMMIT vs. CLOSE race
  SUNRPC: Close a race in __rpc_wait_for_completion_task()
2011-03-10  Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)

* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  sched: Fix sched rt group scheduling when hierarchy is enabled
2011-03-10  SUNRPC: Close a race in __rpc_wait_for_completion_task()  (Trond Myklebust)

Although they run as rpciod background tasks, under normal operation (i.e. no SIGKILL), functions like nfs_sillyrename(), nfs4_proc_unlck() and nfs4_do_close() want to be fully synchronous. This means that when we exit, we want all references to the rpc_task to be gone, and we want any dentry references etc. held by that task to be released.

For this reason these functions call __rpc_wait_for_completion_task(), followed by rpc_put_task() in the expectation that the latter will be releasing the last reference to the rpc_task, and thus ensuring that the callback_ops->rpc_release() has been called synchronously.

This patch fixes a race which exists due to the fact that rpciod calls rpc_complete_task() (in order to wake up the callers of __rpc_wait_for_completion_task()) and then subsequently calls rpc_put_task() without ensuring that these two steps are done atomically.

In order to avoid adding new spin locks, the patch uses the existing waitqueue spin lock to order the rpc_task reference count releases between the waiting process and rpciod. The common case where nobody is waiting for completion is optimised for by checking if the RPC_TASK_ASYNC flag is cleared and/or if the rpc_task reference count is 1: in those cases we drop trying to grab the spin lock, and immediately free up the rpc_task.

Those few processes that need to put the rpc_task from inside an asynchronous context and that do not care about ordering are given a new helper: rpc_put_task_async().

Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2011-03-09  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6  (Linus Torvalds)

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6:
  nd->inode is not set on the second attempt in path_walk()
  unfuck proc_sysctl ->d_compare()
  minimal fix for do_filp_open() race
2011-03-08  unfuck proc_sysctl ->d_compare()  (Al Viro)

a) struct inode is not going to be freed under ->d_compare(); however, the thing PROC_I(inode)->sysctl points to just might. Fortunately, it's enough to make freeing that sucker delayed, provided that we don't step on its ->unregistering, clear the pointer to it in PROC_I(inode) before dropping the reference and check if it's NULL in ->d_compare().

b) I'm not sure that we *can* walk into NULL inode here (we recheck dentry->seq between verifying that it's still hashed / fetching dentry->d_inode and passing it to ->d_compare() and there's no negative hashed dentries in /proc/sys/*), but if we can walk into that, we really should not have ->d_compare() return 0 on it! Said that, I really suspect that this check can be simply killed. Nick?

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-03-04  cpuset: add a missing unlock in cpuset_write_resmask()  (Li Zefan)
Don't forget to release cgroup_mutex if alloc_trial_cpuset() fails. [akpm@linux-foundation.org: avoid multiple return points] Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Cc: Paul Menage <menage@google.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Miao Xie <miaox@cn.fujitsu.com> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-03-04  Mark ptrace_{traceme,attach,detach} static  (Linus Torvalds)
They are only used inside kernel/ptrace.c, and have been for a long time. We don't want to go back to the bad-old-days when architectures did things on their own, so make them static and private. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-03-04  sched: Fix sched rt group scheduling when hierarchy is enabled  (Balbir Singh)

The current sched rt code is broken when it comes to hierarchical scheduling; this patch fixes two problems:

1. It adds redundant enqueuing (harmless) when it finds a queue has tasks enqueued, but it has no run time and it is not throttled.

2. The most important change is in sched_rt_rq_enqueue/dequeue. The code just picks the rt_rq belonging to the current cpu on which the period timer runs; the patch fixes it, so that the correct rt_se is enqueued/dequeued.

Tested with a simple hierarchy /c/d, with c and d assigned similar runtimes of 50,000 and a while 1 loop running within "d". Both c and d get throttled. Without the patch, the task just stops running and never runs (depends on where the sched_rt b/w timer runs). With the patch, the task is throttled and runs as expected.

[ bharata, suggestions on how to pick the rt_se belonging to the rt_rq and correct cpu ]

Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Acked-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: stable@kernel.org
LKML-Reference: <20110303113435.GA2868@balbir.in.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>