Diffstat (limited to 'Documentation')
-rw-r--r-- Documentation/ABI/testing/sysfs-class-extcon | 97
-rw-r--r-- Documentation/ABI/testing/sysfs-devices-power | 35
-rw-r--r-- Documentation/ABI/testing/sysfs-power | 59
-rw-r--r-- Documentation/DMA-attributes.txt | 42
-rw-r--r-- Documentation/HOWTO | 11
-rw-r--r-- Documentation/android.txt | 121
-rw-r--r-- Documentation/arm/nvidia/tegra_parameters.txt | 192
-rw-r--r-- Documentation/cgroups/cgroups.txt | 9
-rw-r--r-- Documentation/cpu-freq/governors.txt | 38
-rw-r--r-- Documentation/device-mapper/dm-crypt.txt | 7
-rw-r--r-- Documentation/devicetree/bindings/arm/arch_timer.txt | 28
-rw-r--r-- Documentation/devicetree/bindings/arm/tegra/emc.txt | 78
-rw-r--r-- Documentation/devicetree/bindings/arm/tegra/nvidia,tegra30-dvfs.txt | 50
-rw-r--r-- Documentation/devicetree/bindings/pinctrl/nvidia,tegra114-pinmux.txt | 118
-rw-r--r-- Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt | 132
-rw-r--r-- Documentation/devicetree/bindings/pinctrl/nvidia,tegra30-pinmux.txt | 132
-rw-r--r-- Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt | 128
-rw-r--r-- Documentation/devicetree/bindings/pinmux/pinmux_nvidia.txt | 5
-rw-r--r-- Documentation/dma-buf-sharing.txt | 98
-rw-r--r-- Documentation/driver-model/devres.txt | 4
-rw-r--r-- Documentation/edp/debugfs | 36
-rw-r--r-- Documentation/edp/design | 155
-rw-r--r-- Documentation/edp/dynamic-edp-capping | 36
-rw-r--r-- Documentation/edp/governors | 84
-rw-r--r-- Documentation/edp/howto | 200
-rw-r--r-- Documentation/edp/sysfs | 41
-rw-r--r-- Documentation/hid/uhid.txt | 169
-rw-r--r-- Documentation/kernel-parameters.txt | 11
-rw-r--r-- Documentation/pinctrl.txt | 94
-rw-r--r-- Documentation/power/power_supply_class.txt | 2
-rw-r--r-- Documentation/power/suspend-and-cpuhotplug.txt | 2
-rw-r--r-- Documentation/thermal/cpu-cooling-api.txt | 32
-rw-r--r-- Documentation/thermal/sysfs-api.txt | 103
-rw-r--r-- Documentation/trace/tracedump.txt | 58
-rw-r--r-- Documentation/trace/tracelevel.txt | 42
-rw-r--r-- Documentation/video/tegra_dc_ext.txt | 83
-rw-r--r-- Documentation/video4linux/README.tegra | 180
-rw-r--r-- Documentation/workqueue.txt | 103
38 files changed, 2660 insertions, 155 deletions
diff --git a/Documentation/ABI/testing/sysfs-class-extcon b/Documentation/ABI/testing/sysfs-class-extcon
new file mode 100644
index 000000000000..20ab361bd8c6
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-extcon
@@ -0,0 +1,97 @@
+What: /sys/class/extcon/.../
+Date: February 2012
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ Provide a place in sysfs for the extcon objects.
+ This allows accessing extcon specific variables.
+ The name of the extcon object denoted as ... is the name given
+ with extcon_dev_register.
+
+ One extcon device denotes a single external connector
+ port. An external connector may have multiple cables
+ attached simultaneously. Many docks, cradles, and
+ accessory cables have such capability. For example,
+ the 30-pin port of Nuri board (/arch/arm/mach-exynos)
+ may have both HDMI and Charger attached, or analog audio,
+ video, and USB cables attached simultaneously.
+
+ If there are cables mutually exclusive with each other,
+ such binary relations may be expressed with extcon_dev's
+ mutually_exclusive array.
+
+What: /sys/class/extcon/.../name
+Date: February 2012
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/extcon/.../name shows the name of the extcon
+ object. If the extcon object has an optional callback
+ "show_name" defined, the callback will provide the name with
+ this sysfs node.
+
+What: /sys/class/extcon/.../state
+Date: February 2012
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/extcon/.../state shows and stores the cable
+ attach/detach information of the corresponding extcon object.
+ If the extcon object has an optional callback "show_state"
+ defined, the showing function is overridden with the optional
+ callback.
+
+ If the default callback for showing function is used, the
+ format is like this:
+ # cat state
+ USB_OTG=1
+ HDMI=0
+ TA=1
+ EAR_JACK=0
+ #
+ In this example, the extcon device has USB_OTG and TA
+ cables attached and HDMI and EAR_JACK cables detached.
+
+ In order to update the state of an extcon device, enter a hex
+ state number starting with 0x.
+ echo 0xHEX > state
+
+ This updates the whole state of the extcon dev.
+ Inputs of all the methods are required to meet the
+ mutually_exclusive conditions if they exist.
+
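+ For example, to reproduce the state shown above (USB_OTG and TA
+ attached, HDMI and EAR_JACK detached), and assuming that cable n
+ of this device maps to bit n of the state value:
+ # echo 0x5 > state
+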
+ It is recommended to use this "global" state interface if
+ you need to enter the value atomically. The per-cable state
+ interface described later cannot update multiple cable states
+ of an extcon device simultaneously.
+
+What: /sys/class/extcon/.../cable.x/name
+Date: February 2012
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/extcon/.../cable.x/name shows the name of cable
+ "x" (integer between 0 and 31) of an extcon device.
+
+What: /sys/class/extcon/.../cable.x/state
+Date: February 2012
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/extcon/.../cable.x/state shows and stores the
+ state of cable "x" (integer between 0 and 31) of an extcon
+ device. The state value is either 0 (detached) or 1
+ (attached).
+
+What: /sys/class/extcon/.../mutually_exclusive/...
+Date: December 2011
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ Shows the mutually exclusive relations. For example,
+ if the mutually_exclusive array of extcon_dev is
+ {0x3, 0x5, 0xC, 0x0}, then the output is:
+ # ls mutually_exclusive/
+ 0x3
+ 0x5
+ 0xc
+ #
+
+ Note that mutually_exclusive is a sub-directory of the extcon
+ device and the file names under the mutually_exclusive
+ directory show the mutually-exclusive sets, not the contents
+ of the files.
diff --git a/Documentation/ABI/testing/sysfs-devices-power b/Documentation/ABI/testing/sysfs-devices-power
index 840f7d64d483..45000f0db4d4 100644
--- a/Documentation/ABI/testing/sysfs-devices-power
+++ b/Documentation/ABI/testing/sysfs-devices-power
@@ -96,16 +96,26 @@ Description:
is read-only. If the device is not enabled to wake up the
system from sleep states, this attribute is not present.
-What: /sys/devices/.../power/wakeup_hit_count
-Date: September 2010
+What: /sys/devices/.../power/wakeup_abort_count
+Date: February 2012
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
- The /sys/devices/.../wakeup_hit_count attribute contains the
+ The /sys/devices/.../wakeup_abort_count attribute contains the
number of times the processing of a wakeup event associated with
- the device might prevent the system from entering a sleep state.
- This attribute is read-only. If the device is not enabled to
- wake up the system from sleep states, this attribute is not
- present.
+ the device might have aborted system transition into a sleep
+ state in progress. This attribute is read-only. If the device
+ is not enabled to wake up the system from sleep states, this
+ attribute is not present.
+
+What: /sys/devices/.../power/wakeup_expire_count
+Date: February 2012
+Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Description:
+ The /sys/devices/.../wakeup_expire_count attribute contains the
+ number of times a wakeup event associated with the device has
+ been reported with a timeout that expired. This attribute is
+ read-only. If the device is not enabled to wake up the system
+ from sleep states, this attribute is not present.
What: /sys/devices/.../power/wakeup_active
Date: September 2010
@@ -148,6 +158,17 @@ Description:
not enabled to wake up the system from sleep states, this
attribute is not present.
+What: /sys/devices/.../power/wakeup_prevent_sleep_time_ms
+Date: February 2012
+Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Description:
+ The /sys/devices/.../wakeup_prevent_sleep_time_ms attribute
+ contains the total time the device has been preventing
+ opportunistic transitions to sleep states from occurring.
+ This attribute is read-only. If the device is not enabled to
+ wake up the system from sleep states, this attribute is not
+ present.
+
What: /sys/devices/.../power/autosuspend_delay_ms
Date: September 2010
Contact: Alan Stern <stern@rowland.harvard.edu>
diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power
index b464d12761ba..31725ffeeb3a 100644
--- a/Documentation/ABI/testing/sysfs-power
+++ b/Documentation/ABI/testing/sysfs-power
@@ -172,3 +172,62 @@ Description:
Reading from this file will display the current value, which is
set to 1 MB by default.
+
+What: /sys/power/autosleep
+Date: April 2012
+Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Description:
+ The /sys/power/autosleep file can be written one of the strings
+ returned by reads from /sys/power/state. If that happens, a
+ work item attempting to trigger a transition of the system to
+ the sleep state represented by that string is queued up. This
+ attempt will only succeed if there are no active wakeup sources
+ in the system at that time. After every execution, regardless
+ of whether or not the attempt to put the system to sleep has
+ succeeded, the work item requeues itself until user space
+ writes "off" to /sys/power/autosleep.
+
+ Reading from this file causes the last string successfully
+ written to it to be returned.
+
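+ A short usage sketch (assuming "mem" is among the states listed
+ by /sys/power/state on the system):
+ # echo mem > /sys/power/autosleep
+ # cat /sys/power/autosleep
+ mem
+ # echo off > /sys/power/autosleep
+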
+What: /sys/power/wake_lock
+Date: February 2012
+Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Description:
+ The /sys/power/wake_lock file allows user space to create
+ wakeup source objects and activate them on demand (if one of
+ those wakeup sources is active, reads from the
+ /sys/power/wakeup_count file block or return false). When a
+ string without white space is written to /sys/power/wake_lock,
+ it will be assumed to represent a wakeup source name. If there
+ is a wakeup source object with that name, it will be activated
+ (unless active already). Otherwise, a new wakeup source object
+ will be registered, assigned the given name and activated.
+ If a string written to /sys/power/wake_lock contains white
+ space, the part of the string preceding the white space will be
+ regarded as a wakeup source name and handled as described above.
+ The other part of the string will be regarded as a timeout (in
+ nanoseconds) such that the wakeup source will be automatically
+ deactivated after it has expired. The timeout, if present, is
+ set regardless of the current state of the wakeup source object
+ in question.
+
+ Reads from this file return a string consisting of the names of
+ wakeup sources created with the help of it that are active at
+ the moment, separated with spaces.
+
+
+What: /sys/power/wake_unlock
+Date: February 2012
+Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Description:
+ The /sys/power/wake_unlock file allows user space to deactivate
+ wakeup sources created with the help of /sys/power/wake_lock.
+ When a string is written to /sys/power/wake_unlock, it will be
+ assumed to represent the name of a wakeup source to deactivate.
+ If a wakeup source object of that name exists and is active at
+ the moment, it will be deactivated.
+
+ Reads from this file return a string consisting of the names of
+ wakeup sources created with the help of /sys/power/wake_lock
+ that are inactive at the moment, separated with spaces.
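+
+ A short usage sketch for the two files above (the wakeup source
+ name "mywakelock" and the 5-second timeout are arbitrary
+ examples):
+ # echo "mywakelock 5000000000" > /sys/power/wake_lock
+ # cat /sys/power/wake_lock
+ mywakelock
+ # echo mywakelock > /sys/power/wake_unlock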
diff --git a/Documentation/DMA-attributes.txt b/Documentation/DMA-attributes.txt
index 5c72eed89563..f50309081ac7 100644
--- a/Documentation/DMA-attributes.txt
+++ b/Documentation/DMA-attributes.txt
@@ -49,3 +49,45 @@ DMA_ATTR_NON_CONSISTENT lets the platform to choose to return either
consistent or non-consistent memory as it sees fit. By using this API,
you are guaranteeing to the platform that you have all the correct and
necessary sync points for this memory in the driver.
+
+DMA_ATTR_NO_KERNEL_MAPPING
+--------------------------
+
+DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
+virtual mapping for the allocated buffer. On some architectures creating
+such a mapping is a non-trivial task and consumes very limited resources
+(like kernel virtual address space or dma consistent address space).
+Buffers allocated with this attribute can only be passed to user space
+by calling dma_mmap_attrs(). By using this API, you are guaranteeing
+that you won't dereference the pointer returned by dma_alloc_attrs(). You
+can treat it as a cookie that must be passed to dma_mmap_attrs() and
+dma_free_attrs(). Make sure that both of these also get this attribute
+set on each call.
+
+Since it is optional for platforms to implement
+DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
+attribute and exhibit default behavior.
+
+DMA_ATTR_SKIP_CPU_SYNC
+----------------------
+
+By default the dma_map_{single,page,sg} family of functions transfers a
+given buffer from the CPU domain to the device domain. Some advanced use
+cases might require sharing a buffer between more than one device. This
+requires having a mapping created separately for each device and is
+usually performed by calling the dma_map_{single,page,sg} function more
+than once for the given buffer, with a device pointer for each device
+taking part in the buffer sharing. The first call transfers the buffer
+from the 'CPU' domain to the 'device' domain, which synchronizes the CPU
+caches for the given region (usually it means that the cache has been
+flushed or invalidated depending on the dma direction). However,
+subsequent calls to dma_map_{single,page,sg}() for other devices will
+perform exactly the same synchronization operation on the CPU cache. CPU
+cache synchronization might be a time consuming operation, especially if
+the buffers are large, so it is highly recommended to avoid it if
+possible. DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip
+synchronization of the CPU cache for the given buffer, assuming that it
+has already been transferred to the 'device' domain. This attribute can
+also be used with the dma_unmap_{single,page,sg} family of functions to
+force the buffer to stay in the device domain after releasing a mapping
+for it. Use this attribute with care!
diff --git a/Documentation/HOWTO b/Documentation/HOWTO
index 59c080f084ef..c9400a43abd4 100644
--- a/Documentation/HOWTO
+++ b/Documentation/HOWTO
@@ -11,7 +11,6 @@ If anything in this document becomes out of date, please send in patches
to the maintainer of this file, who is listed at the bottom of the
document.
-
Introduction
------------
@@ -52,7 +51,6 @@ possible about these standards ahead of time, as they are well
documented; do not expect people to adapt to you or your company's way
of doing things.
-
Legal Issues
------------
@@ -66,7 +64,6 @@ their statements on legal matters.
For common questions and answers about the GPL, please see:
http://www.gnu.org/licenses/gpl-faq.html
-
Documentation
------------
@@ -187,7 +184,7 @@ apply a patch.
If you do not know where you want to start, but you want to look for
some task to start doing to join into the kernel development community,
go to the Linux Kernel Janitor's project:
- http://kernelnewbies.org/KernelJanitors
+ http://kernelnewbies.org/KernelJanitors
It is a great place to start. It describes a list of relatively simple
problems that need to be cleaned up and fixed within the Linux kernel
source tree. Working with the developers in charge of this project, you
@@ -250,10 +247,10 @@ process is as follows:
release a new -rc kernel every week.
- Process continues until the kernel is considered "ready", the
process should last around 6 weeks.
- - Known regressions in each release are periodically posted to the
- linux-kernel mailing list. The goal is to reduce the length of
+ - Known regressions in each release are periodically posted to the
+ linux-kernel mailing list. The goal is to reduce the length of
that list to zero before declaring the kernel to be "ready," but, in
- the real world, a small number of regressions often remain at
+ the real world, a small number of regressions often remain at
release time.
It is worth mentioning what Andrew Morton wrote on the linux-kernel
diff --git a/Documentation/android.txt b/Documentation/android.txt
new file mode 100644
index 000000000000..72a62afdf202
--- /dev/null
+++ b/Documentation/android.txt
@@ -0,0 +1,121 @@
+ =============
+ A N D R O I D
+ =============
+
+Copyright (C) 2009 Google, Inc.
+Written by Mike Chan <mike@android.com>
+
+CONTENTS:
+---------
+
+1. Android
+ 1.1 Required enabled config options
+ 1.2 Required disabled config options
+ 1.3 Recommended enabled config options
+2. Contact
+
+
+1. Android
+==========
+
+Android (www.android.com) is an open source operating system for mobile devices.
+This document describes configurations needed to run the Android framework on
+top of the Linux kernel.
+
+To see a working defconfig, look at msm_defconfig or goldfish_defconfig,
+which can be found at http://android.git.kernel.org in kernel/common.git
+and kernel/msm.git.
+
+
+1.1 Required enabled config options
+-----------------------------------
+After building a standard defconfig, ensure that these options are enabled in
+your .config or defconfig if they are not already (this list is based on the
+msm_defconfig); a sketch of one way to do this follows the list. You should
+keep the rest of the default options enabled in the defconfig unless you know
+what you are doing.
+
+ANDROID_PARANOID_NETWORK
+ASHMEM
+CONFIG_FB_MODE_HELPERS
+CONFIG_FONT_8x16
+CONFIG_FONT_8x8
+CONFIG_YAFFS_SHORT_NAMES_IN_RAM
+DAB
+EARLYSUSPEND
+FB
+FB_CFB_COPYAREA
+FB_CFB_FILLRECT
+FB_CFB_IMAGEBLIT
+FB_DEFERRED_IO
+FB_TILEBLITTING
+HIGH_RES_TIMERS
+INOTIFY
+INOTIFY_USER
+INPUT_EVDEV
+INPUT_GPIO
+INPUT_MISC
+LEDS_CLASS
+LEDS_GPIO
+LOCK_KERNEL
+LOGGER
+LOW_MEMORY_KILLER
+MISC_DEVICES
+NEW_LEDS
+NO_HZ
+POWER_SUPPLY
+PREEMPT
+RAMFS
+RTC_CLASS
+RTC_LIB
+SWITCH
+SWITCH_GPIO
+TMPFS
+UID_STAT
+UID16
+USB_FUNCTION
+USB_FUNCTION_ADB
+USER_WAKELOCK
+VIDEO_OUTPUT_CONTROL
+WAKELOCK
+YAFFS_AUTO_YAFFS2
+YAFFS_FS
+YAFFS_YAFFS1
+YAFFS_YAFFS2
+
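+A minimal sketch of enabling a few of the options above from the kernel
+source tree (the scripts/config helper and its flags are assumed to be
+available in your tree; verify the result afterwards):
+
+# ./scripts/config --file .config --enable ASHMEM --enable WAKELOCK
+# make oldconfig
+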
+
+1.2 Required disabled config options
+------------------------------------
+CONFIG_YAFFS_DISABLE_LAZY_LOAD
+DNOTIFY
+
+
+1.3 Recommended enabled config options
+--------------------------------------
+ANDROID_PMEM
+ANDROID_RAM_CONSOLE
+ANDROID_RAM_CONSOLE_ERROR_CORRECTION
+SCHEDSTATS
+DEBUG_PREEMPT
+DEBUG_MUTEXES
+DEBUG_SPINLOCK_SLEEP
+DEBUG_INFO
+FRAME_POINTER
+CPU_FREQ
+CPU_FREQ_TABLE
+CPU_FREQ_DEFAULT_GOV_ONDEMAND
+CPU_FREQ_GOV_ONDEMAND
+CRC_CCITT
+EMBEDDED
+INPUT_TOUCHSCREEN
+I2C
+I2C_BOARDINFO
+LOG_BUF_SHIFT=17
+SERIAL_CORE
+SERIAL_CORE_CONSOLE
+
+
+2. Contact
+==========
+website: http://android.git.kernel.org
+
+mailing-lists: android-kernel@googlegroups.com
diff --git a/Documentation/arm/nvidia/tegra_parameters.txt b/Documentation/arm/nvidia/tegra_parameters.txt
new file mode 100644
index 000000000000..84baf2079994
--- /dev/null
+++ b/Documentation/arm/nvidia/tegra_parameters.txt
@@ -0,0 +1,192 @@
+This file documents NVIDIA Tegra specific sysfs and debugfs files and
+kernel module parameters.
+
+/sys/power/suspend/mode
+-----------------------
+
+Used to select the LP1 or LP0 power state during system suspend.
+# echo lp0 > /sys/kernel/debug/suspend_mode
+# echo lp1 > /sys/kernel/debug/suspend_mode
+
+/sys/module/cpuidle/parameters/power_down_in_idle
+-------------------------------------------------
+
+Used to enable/disable CPU power down in idle.
+# echo 1 > /sys/module/cpuidle/parameters/power_down_in_idle
+# echo 0 > /sys/module/cpuidle/parameters/power_down_in_idle
+
+/sys/kernel/debug/cpuidle/power_down_stats
+------------------------------------------
+
+Contains CPU power down statistics.
+# cat /sys/kernel/debug/cpuidle/power_down_stats
+
+/sys/kernel/debug/powergate
+---------------------------
+
+Contains power gating state of different tegra blocks.
+
+# cat /sys/kernel/debug/powergate
+
+/sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/enable
+------------------------------------------------------
+
+Control hotplugging of cores.
+# echo 0 > /sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/enable
+# echo 1 > /sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/enable
+
+Cpuquiet supports the implementation of multiple policies in the form of
+governors. The balanced governor implements the exact same policy previously
+implemented as "auto hotplug". The behavior with regards to cores coming
+online/offline and switching between the LP and G cluster remain the same.
+
+/sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/no_lp
+-----------------------------------------------------
+
+Enable/disable shadow cluster.
+# echo 0 > /sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/no_lp
+# echo 1 > /sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/no_lp
+
+/sys/devices/system/cpu/cpuquiet/available_governors
+----------------------------------------------------
+
+List available governors.
+# cat /sys/devices/system/cpu/cpuquiet/available_governors
+
+/sys/devices/system/cpu/cpuquiet/current_governor
+-------------------------------------------------
+
+Set the current active cpuquiet governor.
+# echo [governor name] > /sys/devices/system/cpu/cpuquiet/current_governor
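+
+For example, to select the balanced governor described above (assuming it
+is listed in available_governors):
+# echo balanced > /sys/devices/system/cpu/cpuquiet/current_governor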
+
+/sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/idle_bottom_freq
+----------------------------------------------------------------
+
+Main cluster minimum frequency.
+
+/sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/idle_top_freq
+-------------------------------------------------------------
+
+Shadow cluster maximum frequency.
+
+/sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/down_delay
+----------------------------------------------------------
+
+Delay (in jiffies) for switching to shadow cluster.
+
+/sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/up_delay
+--------------------------------------------------------
+
+Delay for switching to main cluster.
+
+/sys/devices/system/cpu/cpuquiet/tegra_cpuquiet/mp_overhead
+-----------------------------------------------------------
+
+Multi-core overhead percentage for EDP limit calculation.
+
+/sys/devices/system/cpu/cpuquiet/balanced/balance_level
+-------------------------------------------------------
+
+Percentage of max speed considered to be in balance. Half of balanced
+speed is considered skewed. Requires balanced governor to be set active.
+
+/sys/devices/system/cpu/cpuquiet/balanced/down_delay
+----------------------------------------------------
+
+Delay for reducing cores. Requires balanced governor to be set active.
+
+/sys/devices/system/cpu/cpuquiet/balanced/up_delay
+--------------------------------------------------
+
+Delay for bringing additional cores online in main cluster. Requires
+balanced governor to be set active.
+
+/sys/kernel/debug/tegra_hotplug/stats
+-------------------------------------
+
+Contains hotplug statistics.
+
+/sys/kernel/cluster/active
+--------------------------
+
+Controls active CPU cluster: main (G) or shadow (LP).
+For manual control, disable auto hotplug, enable immediate switching and
+possibly force the switch to always happen:
+# echo 0 > /sys/module/cpu_tegra3/parameters/auto_hotplug
+# echo 1 > /sys/kernel/cluster/immediate
+# echo 1 > /sys/kernel/cluster/force
+
+Cluster switching can happen only when core 0 is the only core online.
+
+Active cluster can be set or toggled:
+# echo "G" > /sys/kernel/cluster/active
+# echo "LP" > /sys/kernel/cluster/active
+# echo "toggle" > /sys/kernel/cluster/active
+
+/sys/module/tegra30_clocks/parameters/detach_shared_bus
+------------------------------------------------------
+
+Enable/disable shared bus clock update.
+
+/sys/module/tegra3_emc/parameters/emc_enable
+--------------------------------------------
+
+Enable/disable EMC DFS.
+
+/sys/kernel/debug/tegra_emc/stats
+---------------------------------
+
+Contains EMC clock statistics.
+
+/sys/module/tegra3_dvfs/parameters/disable_cpu
+----------------------------------------------
+
+Enable/disable DVFS for CPU domain.
+
+/sys/module/tegra3_dvfs/parameters/disable_core
+-----------------------------------------------
+
+Enable/disable DVFS for CORE domain.
+
+/sys/kernel/debug/clock/emc/rate
+--------------------------------
+
+Get/set EMC clock rate.
+
+/sys/kernel/debug/clock/<module>/rate
+-------------------------------------
+
+/sys/kernel/debug/clock/<module>/parent
+---------------------------------------
+
+/sys/kernel/debug/clock/<module>/state
+--------------------------------------
+
+/sys/kernel/debug/clock/<module>/time_on
+----------------------------------------
+
+/sys/kernel/debug/clock/clock_tree
+----------------------------------
+
+Shows the state of the clock tree.
+
+/sys/kernel/debug/clock/dvfs
+----------------------------
+
+Contains voltage state.
+
+/sys/kernel/debug/tegra_actmon/avp/state
+----------------------------------------
+
+/sys/kernel/debug/clock/mon.avp/rate
+------------------------------------
+
+/sys/kernel/debug/clock/rails
+-----------------------------
+
+Contains the time at each voltage.
+
+/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state
+--------------------------------------------------------
+
+Contains the time at each frequency.
diff --git a/Documentation/cgroups/cgroups.txt b/Documentation/cgroups/cgroups.txt
index 8e74980ab385..594ff17d9da4 100644
--- a/Documentation/cgroups/cgroups.txt
+++ b/Documentation/cgroups/cgroups.txt
@@ -592,6 +592,15 @@ there are not tasks in the cgroup. If pre_destroy() returns error code,
rmdir() will fail with it. From this behavior, pre_destroy() can be
called multiple times against a cgroup.
+int allow_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
+(cgroup_mutex held by caller)
+
+Called prior to moving a task into a cgroup; if the subsystem
+returns an error, this will abort the attach operation. Used
+to extend the permission checks - if all subsystems in a cgroup
+return 0, the attach will be allowed to proceed, even if the
+default permission check (root or same user) fails.
+
int can_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)
diff --git a/Documentation/cpu-freq/governors.txt b/Documentation/cpu-freq/governors.txt
index c7a2eb8450c2..d6ef94a95cc8 100644
--- a/Documentation/cpu-freq/governors.txt
+++ b/Documentation/cpu-freq/governors.txt
@@ -28,6 +28,7 @@ Contents:
2.3 Userspace
2.4 Ondemand
2.5 Conservative
+2.6 Interactive
3. The Governor Interface in the CPUfreq Core
@@ -191,6 +192,43 @@ governor but for the opposite direction. For example when set to its
default value of '20' it means that if the CPU usage needs to be below
20% between samples to have the frequency decreased.
+
+2.6 Interactive
+---------------
+
+The CPUfreq governor "interactive" is designed for latency-sensitive,
+interactive workloads. This governor sets the CPU speed depending on
+usage, similar to "ondemand" and "conservative" governors. However,
+the governor is more aggressive about scaling the CPU speed up in
+response to CPU-intensive activity.
+
+Sampling the CPU load every X ms can lead to under-powering the CPU
+for X ms, leading to dropped frames, stuttering UI, etc. Instead of
+sampling the cpu at a specified rate, the interactive governor will
+check whether to scale the cpu frequency up soon after coming out of
+idle. When the cpu comes out of idle, a timer is configured to fire
+within 1-2 ticks. If the cpu is very busy between exiting idle and
+when the timer fires then we assume the cpu is underpowered and ramp
+to MAX speed.
+
+If the cpu was not sufficiently busy to immediately ramp to MAX speed,
+then the governor evaluates the cpu load since the last speed adjustment,
+choosing the higher of that longer-term load and the short-term load since
+idle exit to determine the cpu speed to ramp to.
+
+The tuneable values for this governor are:
+
+min_sample_time: The minimum amount of time to spend at the current
+frequency before ramping down. This is to ensure that the governor has
+seen enough historic cpu load data to determine the appropriate
+workload. Default is 80000 uS.
+
+go_maxspeed_load: The CPU load at which to ramp to max speed. Default
+is 85.
+
+timer_rate: Sample rate for reevaluating cpu load when the system is
+not idle. Default is 30000 uS.
+
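+A brief usage sketch (the tunable paths are an assumption; on many
+platforms the interactive governor exposes its tunables under
+/sys/devices/system/cpu/cpufreq/interactive/):
+
+# echo interactive > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
+# echo 90 > /sys/devices/system/cpu/cpufreq/interactive/go_maxspeed_load
+# echo 40000 > /sys/devices/system/cpu/cpufreq/interactive/min_sample_time
+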
3. The Governor Interface in the CPUfreq Core
=============================================
diff --git a/Documentation/device-mapper/dm-crypt.txt b/Documentation/device-mapper/dm-crypt.txt
index 2c656ae43ba7..573459b55518 100644
--- a/Documentation/device-mapper/dm-crypt.txt
+++ b/Documentation/device-mapper/dm-crypt.txt
@@ -9,7 +9,7 @@ Parameters: <cipher> <key> <iv_offset> <device path> \
<cipher>
Encryption cipher and an optional IV generation mode.
- (In format cipher[:keycount]-chainmode-ivopts:ivmode).
+ (In format cipher-chainmode-ivopts:ivmode).
Examples:
des
aes-cbc-essiv:sha256
@@ -21,11 +21,6 @@ Parameters: <cipher> <key> <iv_offset> <device path> \
Key used for encryption. It is encoded as a hexadecimal number.
You can only use key sizes that are valid for the selected cipher.
-<keycount>
- Multi-key compatibility mode. You can define <keycount> keys and
- then sectors are encrypted according to their offsets (sector 0 uses key0;
- sector 1 uses key1 etc.). <keycount> must be a power of two.
-
<iv_offset>
The IV offset is a sector count that is added to the sector number
before creating the IV.
diff --git a/Documentation/devicetree/bindings/arm/arch_timer.txt b/Documentation/devicetree/bindings/arm/arch_timer.txt
new file mode 100644
index 000000000000..b1d4c6dcd463
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/arch_timer.txt
@@ -0,0 +1,28 @@
+* ARM architected timer
+
+ARM Cortex-A7 and Cortex-A15 have a per-core architected timer, which
+provides a per-cpu local timer.
+
+The timer is attached to a GIC to deliver its two per-processor
+interrupts (one for the secure mode, one for the non-secure mode).
+
+** Timer node properties:
+
+- compatible : Should be "arm,armv7-timer"
+
+- interrupts : Interrupt list for secure, non-secure, virtual and
+ hypervisor timers, in that order.
+
+- clock-frequency : The frequency of the main counter, in Hz. Optional.
+
+Example:
+
+ timer {
+ compatible = "arm,cortex-a15-timer",
+ "arm,armv7-timer";
+ interrupts = <1 13 0xf08>,
+ <1 14 0xf08>,
+ <1 11 0xf08>,
+ <1 10 0xf08>;
+ clock-frequency = <100000000>;
+ };
diff --git a/Documentation/devicetree/bindings/arm/tegra/emc.txt b/Documentation/devicetree/bindings/arm/tegra/emc.txt
index 09335f8eee00..f735e34932f7 100644
--- a/Documentation/devicetree/bindings/arm/tegra/emc.txt
+++ b/Documentation/devicetree/bindings/arm/tegra/emc.txt
@@ -4,14 +4,15 @@ Properties:
- name : Should be emc
- #address-cells : Should be 1
- #size-cells : Should be 0
-- compatible : Should contain "nvidia,tegra20-emc".
+- compatible : Should contain "nvidia,tegra20-emc" or "nvidia,tegra30-emc"
- reg : Offset and length of the register set for the device
- nvidia,use-ram-code : If present, the sub-nodes will be addressed
and chosen using the ramcode board selector. If omitted, only one
set of tables can be present and said tables will be used
irrespective of ram-code configuration.
-Child device nodes describe the memory settings for different configurations and clock rates.
+Child device nodes describe the memory settings for different configurations
+and clock rates.
Example:
@@ -61,6 +62,8 @@ There are two ways of specifying which tables to use:
these strappings can be read through a register in the SoC, and thus
used to select which tables to use.
+Tables for Tegra20:
+
Properties:
- name : Should be emc-table
- compatible : Should contain "nvidia,tegra20-emc-table".
@@ -98,3 +101,74 @@ Properties:
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 >;
};
+
+Tables for Tegra30:
+
+Properties:
+- name : Should be emc-table
+- compatible : Should contain "nvidia,tegra30-emc-table".
+- reg : either an opaque enumerator to tell different tables apart, or
+ the valid frequency for which the table should be used (in kHz).
+- nvidia,revision : SDRAM revision
+- clock-frequency : the clock frequency for the EMC at which this
+ table should be used (in kHz).
+- nvidia,emc-registers : a word array of EMC registers to be programmed
+ for operation at the 'clock-frequency' setting.
+ The order and contents of the registers are:
+ RC, RFC, RAS, RP, R2W, W2R, R2P, W2P, RD_RCD, WR_RCD, RRD, REXT,
+ WEXT, WDV, QUSE, QRST, QSAFE, RDV, REFRESH, BURST_REFRESH_NUM,
+ PRE_REFRESH_REQ_CNT, PDEX2WR, PDEX2RD, PCHG2PDEN, ACT2PDEN,
+ AR2PDEN, RW2PDEN, TXSR, TXSRDLL, TCKE, TFAW, TRPAB, TCLKSTABLE,
+ TCLKSTOP, TREFBW, QUSE_EXTRA, FBIO_CFG6, ODT_WRITE, ODT_READ,
+ FBIO_CFG5, CFG_DIG_DLL, CFG_DIG_DLL_PERIOD,
+ DLL_XFORM_DQS0, DLL_XFORM_DQS1, DLL_XFORM_DQS2, DLL_XFORM_DQS3,
+ DLL_XFORM_DQS4, DLL_XFORM_DQS5, DLL_XFORM_DQS6, DLL_XFORM_DQS7,
+ DLL_XFORM_QUSE0, DLL_XFORM_QUSE1, DLL_XFORM_QUSE2, DLL_XFORM_QUSE3,
+ DLL_XFORM_QUSE4, DLL_XFORM_QUSE5, DLL_XFORM_QUSE6, DLL_XFORM_QUSE7,
+ DLI_TRIM_TXDQS0, DLI_TRIM_TXDQS1, DLI_TRIM_TXDQS2, DLI_TRIM_TXDQS3,
+ DLI_TRIM_TXDQS4, DLI_TRIM_TXDQS5, DLI_TRIM_TXDQS6, DLI_TRIM_TXDQS7,
+ DLL_XFORM_DQ0, DLL_XFORM_DQ1, DLL_XFORM_DQ2, DLL_XFORM_DQ3,
+ DLL_XFORM_DQ1, DLL_XFORM_DQ2, DLL_XFORM_DQ3, XM2CMDPADCTRL, XM2DQSPADCTRL2,
+ XM2DQPADCTRL2, XM2CLKPADCTRL, XM2COMPPADCTRL, XM2VTTGENPADCTRL,
+ XM2VTTGENPADCTRL2, XM2QUSEPADCTRL, XM2DQSPADCTRL3, CTT_TERM_CTRL,
+ ZCAL_INTERVAL, ZCAL_WAIT_CNT, MRS_WAIT_CNT, AUTO_CAL_CONFIG, CTT,
+ CTT_DURATION, DYN_SELF_REF_CONTROL, EMEM_ARB_CFG, EMEM_ARB_OUTSTANDING_REQ,
+ EMEM_ARB_TIMING_RCD, EMEM_ARB_TIMING_RP, EMEM_ARB_TIMING_RC,
+ EMEM_ARB_TIMING_RAS, EMEM_ARB_TIMING_FAW, EMEM_ARB_TIMING_RRD,
+ EMEM_ARB_TIMING_RAP2PRE, EMEM_ARB_TIMING_WAP2PRE, EMEM_ARB_TIMING_R2R,
+ EMEM_ARB_TIMING_W2W, EMEM_ARB_TIMING_R2W, EMEM_ARB_TIMING_W2R,
+ EMEM_ARB_DA_TURNS, EMEM_ARB_DA_COVERS, EMEM_ARB_MISC0,
+ EMEM_ARB_RING1_THROTTLE, FBIO_SPARE, CFG_RSV
+
+optional properties:
+- nvidia,emc-zcal-cnt-long : EMC_ZCAL_WAIT_CNT after clock change
+- nvidia,emc-acal-interval : EMC_AUTO_CAL_INTERVAL
+- nvidia,emc-periodic-qrst : EMC_CFG.PERIODIC_QRST
+- nvidia,emc-mode-reset : Mode Register 0
+- nvidia,emc-mode-1 : Mode Register 1
+- nvidia,emc-mode-2 : Mode Register 2
+- nvidia,emc-dsr : EMC_CFG.DYN_SELF_REF
+- nvidia,emc-min-mv : Minimum voltage
+
+ emc-table@166000 {
+ reg = <166000>;
+ compatible = "nvidia,tegra30-emc-table";
+ clock-frequency = < 166000 >;
+ nvidia,revision = <0>;
+ nvidia,emc-registers = < 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ 0 0 0 0 0 0 0 0 0 0 0 0>;
+ nvidia,emc-zcal-cnt-long = <0>;
+ nvidia,emc-acal-interval = <0>;
+ nvidia,emc-periodic-qrst = <0>;
+ nvidia,emc-mode-reset = <0>;
+ nvidia,emc-mode-1 = <0>;
+ nvidia,emc-mode-2 = <0>;
+ nvidia,emc-dsr = <0>;
+ nvidia,emc-min-mv = <0>;
+ };
diff --git a/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra30-dvfs.txt b/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra30-dvfs.txt
new file mode 100644
index 000000000000..0ecc7aeef42a
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra30-dvfs.txt
@@ -0,0 +1,50 @@
+NVIDIA Tegra30 DVFS tables
+
+dvfs-tables node:
+All the tables must be contained in the dvfs-tables parent node. This node is
+just a container for all dvfs tables; it does not have any compatible property.
+
+Tables:
+Required properties for child nodes of dvfs-tables:
+
+compatible: Must be any of
+ "nvidia,tegra30-cpu-dvfs" for CPU dvfs tables or
+ "nvidia,tegra30-cpu0-dvfs" for cpu0 dvfs tables or
+ "nvidia,tegra30-core-dvfs" for core dvfs tables.
+
+voltage-table: Voltage steps for rail. Unit for voltage value is mV.
+
+#address-cells: Should be 0.
+#size-cells: Should be 1.
+
+Frequency tables:
+
+Frequency tables are grouped using the combination of speedo-id, process-id and manual-dvfs.
+
+Required properties:
+
+reg: Can be any number, but must be the same as used in the node name.
+ Should be unique within the dvfs table.
+clock-name: Clock name for which frequencies are mentioned in the table.
+frequencies: Array of frequencies. Unit for frequency is kHz.
+
+Optional properties:
+speedo-id: If not present, speedo id value will be -1.
+process-id: If not present, process id value will be -1.
+manual-dvfs: If not present, dvfs for the clocks in this frequency table is auto.
+
+Example:
+
+ dvfs-tables {
+ cpudvfs {
+ compatible = "nvidia,tegra30-cpu-dvfs";
+ voltage-table = <800 825 850 875 900 916 950 975 1000 1007 1025 1050 1075 1100 1125 1150 1175 1200 1212 1237>;
+
+ frequency-table@1 {
+ reg = <1>;
+ speedo-id = <0>;
+ process-id = <0>;
+ clock-name = "cpu_g";
+ frequencies = <1 1 684000 684000 817000 817000 817000 1026000 1102000 1102000 1149000 1187000 1225000 1282000 1300000>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra114-pinmux.txt b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra114-pinmux.txt
new file mode 100644
index 000000000000..ce427c1591b1
--- /dev/null
+++ b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra114-pinmux.txt
@@ -0,0 +1,118 @@
+NVIDIA Tegra114 pinmux controller
+
+The Tegra114 pinctrl binding is very similar to the Tegra20 and Tegra30
+pinctrl binding, as described in nvidia,tegra20-pinmux.txt and
+nvidia,tegra30-pinmux.txt. In fact, this document assumes those bindings as
+a baseline, and only documents the differences between them.
+
+Required properties:
+- compatible: "nvidia,tegra114-pinmux"
+- reg: Should contain the register physical address and length for each of
+ the pad control and mux registers.
+
+Tegra114 adds the following optional properties for pin configuration subnodes:
+- nvidia,enable-input: Integer. Enable the pin's input path. 0: no, 1: yes.
+- nvidia,open-drain: Integer. Enable open drain mode. 0: no, 1: yes.
+- nvidia,lock: Integer. Lock the pin configuration against further changes
+ until reset. 0: no, 1: yes.
+- nvidia,io-reset: Integer. Reset the IO path. 0: no, 1: yes.
+- nvidia,rcv-sel: Integer. Select VIL/VIH receivers. 0: normal, 1: high.
+- nvidia,drive-type: Integer. Valid range 0...3.
+
+As with Tegra20 and Tegra30, see the Tegra TRM for complete details regarding
+which groups support which functionality.
+
+Valid values for pin and group names are:
+
+ per-pin mux groups:
+
+ These all support nvidia,function, nvidia,tristate, nvidia,pull,
+ nvidia,enable-input, nvidia,lock. Some support nvidia,open-drain,
+ nvidia,io-reset and nvidia,rcv-sel.
+
+ ulpi_data0_po1, ulpi_data1_po2, ulpi_data2_po3, ulpi_data3_po4,
+ ulpi_data4_po5, ulpi_data5_po6, ulpi_data6_po7, ulpi_data7_po0,
+ ulpi_clk_py0, ulpi_dir_py1, ulpi_nxt_py2, ulpi_stp_py3, dap3_fs_pp0,
+ dap3_din_pp1, dap3_dout_pp2, dap3_sclk_pp3, pv0, pv1, sdmmc1_clk_pz0,
+ sdmmc1_cmd_pz1, sdmmc1_dat3_py4, sdmmc1_dat2_py5, sdmmc1_dat1_py6,
+ sdmmc1_dat0_py7, clk2_out_pw5, clk2_req_pcc5, hdmi_int_pn7, ddc_scl_pv4,
+ ddc_sda_pv5, uart2_rxd_pc3, uart2_txd_pc2, uart2_rts_n_pj6,
+ uart2_cts_n_pj5, uart3_txd_pw6, uart3_rxd_pw7, uart3_cts_n_pa1,
+ uart3_rts_n_pc0, pu0, pu1, pu2, pu3, pu4, pu5, pu6, gen1_i2c_sda_pc5,
+ gen1_i2c_scl_pc4, dap4_fs_pp4, dap4_din_pp5, dap4_dout_pp6, dap4_sclk_pp7,
+ clk3_out_pee0, clk3_req_pee1, gmi_wp_n_pc7, gmi_iordy_pi5, gmi_wait_pi7,
+ gmi_adv_n_pk0, gmi_clk_pk1, gmi_cs0_n_pj0, gmi_cs1_n_pj2, gmi_cs2_n_pk3,
+ gmi_cs3_n_pk4, gmi_cs4_n_pk2, gmi_cs6_n_pi3, gmi_cs7_n_pi6, gmi_ad0_pg0,
+ gmi_ad1_pg1, gmi_ad2_pg2, gmi_ad3_pg3, gmi_ad4_pg4, gmi_ad5_pg5,
+ gmi_ad6_pg6, gmi_ad7_pg7, gmi_ad8_ph0, gmi_ad9_ph1, gmi_ad10_ph2,
+ gmi_ad11_ph3, gmi_ad12_ph4, gmi_ad13_ph5, gmi_ad14_ph6, gmi_ad15_ph7,
+ gmi_a16_pj7, gmi_a17_pb0, gmi_a18_pb1, gmi_a19_pk7, gmi_wr_n_pi0,
+ gmi_oe_n_pi1, gmi_dqs_p_pj3, gmi_rst_n_pi4, gen2_i2c_scl_pt5,
+ gen2_i2c_sda_pt6, sdmmc4_clk_pcc4, sdmmc4_cmd_pt7, sdmmc4_dat0_paa0,
+ sdmmc4_dat1_paa1, sdmmc4_dat2_paa2, sdmmc4_dat3_paa3, sdmmc4_dat4_paa4,
+ sdmmc4_dat5_paa5, sdmmc4_dat6_paa6, sdmmc4_dat7_paa7, cam_mclk_pcc0,
+ pcc1, pbb0, cam_i2c_scl_pbb1, cam_i2c_sda_pbb2, pbb3, pbb4, pbb5, pbb6,
+ pbb7, pcc2, pwr_i2c_scl_pz6, pwr_i2c_sda_pz7, kb_row0_pr0, kb_row1_pr1,
+ kb_row2_pr2, kb_row3_pr3, kb_row4_pr4, kb_row5_pr5, kb_row6_pr6,
+ kb_row7_pr7, kb_row8_ps0, kb_row9_ps1, kb_row10_ps2, kb_col0_pq0,
+ kb_col1_pq1, kb_col2_pq2, kb_col3_pq3, kb_col4_pq4, kb_col5_pq5,
+ kb_col6_pq6, kb_col7_pq7, clk_32k_out_pa0, sys_clk_req_pz5, core_pwr_req,
+ cpu_pwr_req, pwr_int_n, owr, dap1_fs_pn0, dap1_din_pn1, dap1_dout_pn2,
+ dap1_sclk_pn3, clk1_req_pee2, clk1_out_pw4, spdif_in_pk6, spdif_out_pk5,
+ dap2_fs_pa2, dap2_din_pa4, dap2_dout_pa5, dap2_sclk_pa3, dvfs_pwm_px0,
+ gpio_x1_aud_px1, gpio_x3_aud_px3, dvfs_clk_px2, gpio_x4_aud_px4,
+ gpio_x5_aud_px5, gpio_x6_aud_px6, gpio_x7_aud_px7, sdmmc3_clk_pa6,
+ sdmmc3_cmd_pa7, sdmmc3_dat0_pb7, sdmmc3_dat1_pb6, sdmmc3_dat2_pb5,
+ sdmmc3_dat3_pb4, hdmi_cec_pee3, sdmmc1_wp_n_pv3, sdmmc3_cd_n_pv2,
+ gpio_w2_aud_pw2, gpio_w3_aud_pw3, usb_vbus_en0_pn4, usb_vbus_en1_pn5,
+ sdmmc3_clk_lb_in_pee5, sdmmc3_clk_lb_out_pee4, reset_out_n.
+
+ drive groups:
+
+ These all support nvidia,pull-down-strength, nvidia,pull-up-strength,
+ nvidia,slew-rate-rising, nvidia,slew-rate-falling. Most but not all
+ support nvidia,high-speed-mode, nvidia,schmitt, nvidia,low-power-mode
+ and nvidia,drive-type.
+
+ ao1, ao2, at1, at2, at3, at4, at5, cdev1, cdev2, dap1, dap2, dap3, dap4,
+ dbg, sdio3, spi, uaa, uab, uart2, uart3, sdio1, ddc, gma, gme, gmf, gmg,
+ gmh, owr, uda.
+
+Example:
+
+ pinmux: pinmux {
+ compatible = "nvidia,tegra11x-pinmux";
+ reg = <0x70000868 0xd4 /* Pad control registers */
+ 0x70003000 0x3e4>; /* Mux registers */
+ };
+
+Example board file extract:
+
+ pinctrl {
+ sdmmc4_default: pinmux {
+ sdmmc4_clk_pcc4 {
+ nvidia,pins = "sdmmc4_clk_pcc4",
+ nvidia,function = "sdmmc4";
+ nvidia,pull = <0>;
+ nvidia,tristate = <0>;
+ };
+ sdmmc4_dat0_paa0 {
+ nvidia,pins = "sdmmc4_dat0_paa0",
+ "sdmmc4_dat1_paa1",
+ "sdmmc4_dat2_paa2",
+ "sdmmc4_dat3_paa3",
+ "sdmmc4_dat4_paa4",
+ "sdmmc4_dat5_paa5",
+ "sdmmc4_dat6_paa6",
+ "sdmmc4_dat7_paa7";
+ nvidia,function = "sdmmc4";
+ nvidia,pull = <2>;
+ nvidia,tristate = <0>;
+ };
+ };
+ };
+
+ sdhci@78000400 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&sdmmc4_default>;
+ };
diff --git a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt
new file mode 100644
index 000000000000..683fde93c4fb
--- /dev/null
+++ b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt
@@ -0,0 +1,132 @@
+NVIDIA Tegra20 pinmux controller
+
+Required properties:
+- compatible: "nvidia,tegra20-pinmux"
+- reg: Should contain the register physical address and length for each of
+ the tri-state, mux, pull-up/down, and pad control register sets.
+
+Please refer to pinctrl-bindings.txt in this directory for details of the
+common pinctrl bindings used by client devices, including the meaning of the
+phrase "pin configuration node".
+
+Tegra's pin configuration nodes act as a container for an arbitrary number of
+subnodes. Each of these subnodes represents some desired configuration for a
+pin, a group, or a list of pins or groups. This configuration can include the
+mux function to select on those pin(s)/group(s), and various pin configuration
+parameters, such as pull-up, tristate, drive strength, etc.
+
+The name of each subnode is not important; all subnodes should be enumerated
+and processed purely based on their content.
+
+Each subnode only affects those parameters that are explicitly listed. In
+other words, a subnode that lists a mux function but no pin configuration
+parameters implies no information about any pin configuration parameters.
+Similarly, a pin subnode that describes a pullup parameter implies no
+information about e.g. the mux function or tristate parameter. For this
+reason, even seemingly boolean values are actually tristates in this binding:
+unspecified, off, or on. Unspecified is represented as an absent property,
+and off/on are represented as integer values 0 and 1.
+
+Required subnode-properties:
+- nvidia,pins : An array of strings. Each string contains the name of a pin or
+ group. Valid values for these names are listed below.
+
+Optional subnode-properties:
+- nvidia,function: A string containing the name of the function to mux to the
+ pin or group. Valid values for function names are listed below. See the Tegra
+ TRM to determine which are valid for each pin or group.
+- nvidia,pull: Integer, representing the pull-down/up to apply to the pin.
+ 0: none, 1: down, 2: up.
+- nvidia,tristate: Integer.
+ 0: drive, 1: tristate.
+- nvidia,high-speed-mode: Integer. Enable high speed mode for the pins.
+ 0: no, 1: yes.
+- nvidia,schmitt: Integer. Enables Schmitt Trigger on the input.
+ 0: no, 1: yes.
+- nvidia,low-power-mode: Integer. Valid values 0-3. 0 is least power, 3 is
+ most power. Controls the drive power or current. See "Low Power Mode"
+ or "LPMD1" and "LPMD0" in the Tegra TRM.
+- nvidia,pull-down-strength: Integer. Controls drive strength. 0 is weakest.
+ The range of valid values depends on the pingroup. See "CAL_DRVDN" in the
+ Tegra TRM.
+- nvidia,pull-up-strength: Integer. Controls drive strength. 0 is weakest.
+ The range of valid values depends on the pingroup. See "CAL_DRVUP" in the
+ Tegra TRM.
+- nvidia,slew-rate-rising: Integer. Controls rising signal slew rate. 0 is
+ fastest. The range of valid values depends on the pingroup. See
+ "DRVDN_SLWR" in the Tegra TRM.
+- nvidia,slew-rate-falling: Integer. Controls falling signal slew rate. 0 is
+ fastest. The range of valid values depends on the pingroup. See
+ "DRVUP_SLWF" in the Tegra TRM.
+
+Note that many of these properties are only valid for certain specific pins
+or groups. See the Tegra TRM and various pinmux spreadsheets for complete
+details regarding which groups support which functionality. The Linux pinctrl
+driver may also be a useful reference, since it consolidates, disambiguates,
+and corrects data from all those sources.
+
+Valid values for pin and group names are:
+
+ mux groups:
+
+ These all support nvidia,function, nvidia,tristate, and many support
+ nvidia,pull.
+
+ ata, atb, atc, atd, ate, cdev1, cdev2, crtp, csus, dap1, dap2, dap3, dap4,
+ ddc, dta, dtb, dtc, dtd, dte, dtf, gma, gmb, gmc, gmd, gme, gpu, gpu7,
+ gpv, hdint, i2cp, irrx, irtx, kbca, kbcb, kbcc, kbcd, kbce, kbcf, lcsn,
+ ld0, ld1, ld2, ld3, ld4, ld5, ld6, ld7, ld8, ld9, ld10, ld11, ld12, ld13,
+ ld14, ld15, ld16, ld17, ldc, ldi, lhp0, lhp1, lhp2, lhs, lm0, lm1, lpp,
+ lpw0, lpw1, lpw2, lsc0, lsc1, lsck, lsda, lsdi, lspi, lvp0, lvp1, lvs,
+ owc, pmc, pta, rm, sdb, sdc, sdd, sdio1, slxa, slxc, slxd, slxk, spdi,
+ spdo, spia, spib, spic, spid, spie, spif, spig, spih, uaa, uab, uac, uad,
+ uca, ucb, uda.
+
+ tristate groups:
+
+ These only support nvidia,pull.
+
+ ck32, ddrc, pmca, pmcb, pmcc, pmcd, pmce, xm2c, xm2d, ls, lc, ld17_0,
+ ld19_18, ld21_20, ld23_22.
+
+ drive groups:
+
+ With some exceptions, these support nvidia,high-speed-mode,
+ nvidia,schmitt, nvidia,low-power-mode, nvidia,pull-down-strength,
+ nvidia,pull-up-strength, nvidia,slew-rate-rising, nvidia,slew-rate-falling.
+
+ drive_ao1, drive_ao2, drive_at1, drive_at2, drive_cdev1, drive_cdev2,
+ drive_csus, drive_dap1, drive_dap2, drive_dap3, drive_dap4, drive_dbg,
+ drive_lcd1, drive_lcd2, drive_sdmmc2, drive_sdmmc3, drive_spi, drive_uaa,
+ drive_uab, drive_uart2, drive_uart3, drive_vi1, drive_vi2, drive_xm2a,
+ drive_xm2c, drive_xm2d, drive_xm2clk, drive_sdio1, drive_crt, drive_ddc,
+ drive_gma, drive_gmb, drive_gmc, drive_gmd, drive_gme, drive_owr,
+ drive_uda.
+
+Example:
+
+ pinctrl@70000000 {
+ compatible = "nvidia,tegra20-pinmux";
+ reg = < 0x70000014 0x10 /* Tri-state registers */
+ 0x70000080 0x20 /* Mux registers */
+ 0x700000a0 0x14 /* Pull-up/down registers */
+ 0x70000868 0xa8 >; /* Pad control registers */
+ };
+
+Example board file extract:
+
+ pinctrl@70000000 {
+ sdio4_default: sdio4_default {
+ atb {
+ nvidia,pins = "atb", "gma", "gme";
+ nvidia,function = "sdio4";
+ nvidia,pull = <0>;
+ nvidia,tristate = <0>;
+ };
+ };
+ };
+
+ sdhci@c8000600 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&sdio4_default>;
+ };
diff --git a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra30-pinmux.txt b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra30-pinmux.txt
new file mode 100644
index 000000000000..6f426ed7009e
--- /dev/null
+++ b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra30-pinmux.txt
@@ -0,0 +1,132 @@
+NVIDIA Tegra30 pinmux controller
+
+The Tegra30 pinctrl binding is very similar to the Tegra20 pinctrl binding,
+as described in nvidia,tegra20-pinmux.txt. In fact, this document assumes
+that binding as a baseline, and only documents the differences between the
+two bindings.
+
+Required properties:
+- compatible: "nvidia,tegra30-pinmux"
+- reg: Should contain the register physical address and length for each of
+ the pad control and mux registers.
+
+Tegra30 adds the following optional properties for pin configuration subnodes:
+- nvidia,enable-input: Integer. Enable the pin's input path. 0: no, 1: yes.
+- nvidia,open-drain: Integer. Enable open drain mode. 0: no, 1: yes.
+- nvidia,lock: Integer. Lock the pin configuration against further changes
+ until reset. 0: no, 1: yes.
+- nvidia,io-reset: Integer. Reset the IO path. 0: no, 1: yes.
+
+As with Tegra20, see the Tegra TRM for complete details regarding which groups
+support which functionality.
+
+Valid values for pin and group names are:
+
+ per-pin mux groups:
+
+ These all support nvidia,function, nvidia,tristate, nvidia,pull,
+ nvidia,enable-input, nvidia,lock. Some support nvidia,open-drain,
+ nvidia,io-reset.
+
+ clk_32k_out_pa0, uart3_cts_n_pa1, dap2_fs_pa2, dap2_sclk_pa3,
+ dap2_din_pa4, dap2_dout_pa5, sdmmc3_clk_pa6, sdmmc3_cmd_pa7, gmi_a17_pb0,
+ gmi_a18_pb1, lcd_pwr0_pb2, lcd_pclk_pb3, sdmmc3_dat3_pb4, sdmmc3_dat2_pb5,
+ sdmmc3_dat1_pb6, sdmmc3_dat0_pb7, uart3_rts_n_pc0, lcd_pwr1_pc1,
+ uart2_txd_pc2, uart2_rxd_pc3, gen1_i2c_scl_pc4, gen1_i2c_sda_pc5,
+ lcd_pwr2_pc6, gmi_wp_n_pc7, sdmmc3_dat5_pd0, sdmmc3_dat4_pd1, lcd_dc1_pd2,
+ sdmmc3_dat6_pd3, sdmmc3_dat7_pd4, vi_d1_pd5, vi_vsync_pd6, vi_hsync_pd7,
+ lcd_d0_pe0, lcd_d1_pe1, lcd_d2_pe2, lcd_d3_pe3, lcd_d4_pe4, lcd_d5_pe5,
+ lcd_d6_pe6, lcd_d7_pe7, lcd_d8_pf0, lcd_d9_pf1, lcd_d10_pf2, lcd_d11_pf3,
+ lcd_d12_pf4, lcd_d13_pf5, lcd_d14_pf6, lcd_d15_pf7, gmi_ad0_pg0,
+ gmi_ad1_pg1, gmi_ad2_pg2, gmi_ad3_pg3, gmi_ad4_pg4, gmi_ad5_pg5,
+ gmi_ad6_pg6, gmi_ad7_pg7, gmi_ad8_ph0, gmi_ad9_ph1, gmi_ad10_ph2,
+ gmi_ad11_ph3, gmi_ad12_ph4, gmi_ad13_ph5, gmi_ad14_ph6, gmi_ad15_ph7,
+ gmi_wr_n_pi0, gmi_oe_n_pi1, gmi_dqs_pi2, gmi_cs6_n_pi3, gmi_rst_n_pi4,
+ gmi_iordy_pi5, gmi_cs7_n_pi6, gmi_wait_pi7, gmi_cs0_n_pj0, lcd_de_pj1,
+ gmi_cs1_n_pj2, lcd_hsync_pj3, lcd_vsync_pj4, uart2_cts_n_pj5,
+ uart2_rts_n_pj6, gmi_a16_pj7, gmi_adv_n_pk0, gmi_clk_pk1, gmi_cs4_n_pk2,
+ gmi_cs2_n_pk3, gmi_cs3_n_pk4, spdif_out_pk5, spdif_in_pk6, gmi_a19_pk7,
+ vi_d2_pl0, vi_d3_pl1, vi_d4_pl2, vi_d5_pl3, vi_d6_pl4, vi_d7_pl5,
+ vi_d8_pl6, vi_d9_pl7, lcd_d16_pm0, lcd_d17_pm1, lcd_d18_pm2, lcd_d19_pm3,
+ lcd_d20_pm4, lcd_d21_pm5, lcd_d22_pm6, lcd_d23_pm7, dap1_fs_pn0,
+ dap1_din_pn1, dap1_dout_pn2, dap1_sclk_pn3, lcd_cs0_n_pn4, lcd_sdout_pn5,
+ lcd_dc0_pn6, hdmi_int_pn7, ulpi_data7_po0, ulpi_data0_po1, ulpi_data1_po2,
+ ulpi_data2_po3, ulpi_data3_po4, ulpi_data4_po5, ulpi_data5_po6,
+ ulpi_data6_po7, dap3_fs_pp0, dap3_din_pp1, dap3_dout_pp2, dap3_sclk_pp3,
+ dap4_fs_pp4, dap4_din_pp5, dap4_dout_pp6, dap4_sclk_pp7, kb_col0_pq0,
+ kb_col1_pq1, kb_col2_pq2, kb_col3_pq3, kb_col4_pq4, kb_col5_pq5,
+ kb_col6_pq6, kb_col7_pq7, kb_row0_pr0, kb_row1_pr1, kb_row2_pr2,
+ kb_row3_pr3, kb_row4_pr4, kb_row5_pr5, kb_row6_pr6, kb_row7_pr7,
+ kb_row8_ps0, kb_row9_ps1, kb_row10_ps2, kb_row11_ps3, kb_row12_ps4,
+ kb_row13_ps5, kb_row14_ps6, kb_row15_ps7, vi_pclk_pt0, vi_mclk_pt1,
+ vi_d10_pt2, vi_d11_pt3, vi_d0_pt4, gen2_i2c_scl_pt5, gen2_i2c_sda_pt6,
+ sdmmc4_cmd_pt7, pu0, pu1, pu2, pu3, pu4, pu5, pu6, jtag_rtck_pu7, pv0,
+ pv1, pv2, pv3, ddc_scl_pv4, ddc_sda_pv5, crt_hsync_pv6, crt_vsync_pv7,
+ lcd_cs1_n_pw0, lcd_m1_pw1, spi2_cs1_n_pw2, spi2_cs2_n_pw3, clk1_out_pw4,
+ clk2_out_pw5, uart3_txd_pw6, uart3_rxd_pw7, spi2_mosi_px0, spi2_miso_px1,
+ spi2_sck_px2, spi2_cs0_n_px3, spi1_mosi_px4, spi1_sck_px5, spi1_cs0_n_px6,
+ spi1_miso_px7, ulpi_clk_py0, ulpi_dir_py1, ulpi_nxt_py2, ulpi_stp_py3,
+ sdmmc1_dat3_py4, sdmmc1_dat2_py5, sdmmc1_dat1_py6, sdmmc1_dat0_py7,
+ sdmmc1_clk_pz0, sdmmc1_cmd_pz1, lcd_sdin_pz2, lcd_wr_n_pz3, lcd_sck_pz4,
+ sys_clk_req_pz5, pwr_i2c_scl_pz6, pwr_i2c_sda_pz7, sdmmc4_dat0_paa0,
+ sdmmc4_dat1_paa1, sdmmc4_dat2_paa2, sdmmc4_dat3_paa3, sdmmc4_dat4_paa4,
+ sdmmc4_dat5_paa5, sdmmc4_dat6_paa6, sdmmc4_dat7_paa7, pbb0,
+ cam_i2c_scl_pbb1, cam_i2c_sda_pbb2, pbb3, pbb4, pbb5, pbb6, pbb7,
+ cam_mclk_pcc0, pcc1, pcc2, sdmmc4_rst_n_pcc3, sdmmc4_clk_pcc4,
+ clk2_req_pcc5, pex_l2_rst_n_pcc6, pex_l2_clkreq_n_pcc7,
+ pex_l0_prsnt_n_pdd0, pex_l0_rst_n_pdd1, pex_l0_clkreq_n_pdd2,
+ pex_wake_n_pdd3, pex_l1_prsnt_n_pdd4, pex_l1_rst_n_pdd5,
+ pex_l1_clkreq_n_pdd6, pex_l2_prsnt_n_pdd7, clk3_out_pee0, clk3_req_pee1,
+ clk1_req_pee2, hdmi_cec_pee3, clk_32k_in, core_pwr_req, cpu_pwr_req, owr,
+ pwr_int_n.
+
+ drive groups:
+
+ These all support nvidia,pull-down-strength, nvidia,pull-up-strength,
+ nvidia,slew-rate-rising, nvidia,slew-rate-falling. Most but not all
+ support nvidia,high-speed-mode, nvidia,schmitt, nvidia,low-power-mode.
+
+ ao1, ao2, at1, at2, at3, at4, at5, cdev1, cdev2, cec, crt, csus, dap1,
+ dap2, dap3, dap4, dbg, ddc, dev3, gma, gmb, gmc, gmd, gme, gmf, gmg,
+ gmh, gpv, lcd1, lcd2, owr, sdio1, sdio2, sdio3, spi, uaa, uab, uart2,
+ uart3, uda, vi1.
+
+Example:
+
+ pinctrl@70000000 {
+ compatible = "nvidia,tegra30-pinmux";
+ reg = < 0x70000868 0xd0 /* Pad control registers */
+ 0x70003000 0x3e0 >; /* Mux registers */
+ };
+
+Example board file extract:
+
+ pinctrl@70000000 {
+ sdmmc4_default: pinmux {
+ sdmmc4_clk_pcc4 {
+ nvidia,pins = "sdmmc4_clk_pcc4",
+ "sdmmc4_rst_n_pcc3";
+ nvidia,function = "sdmmc4";
+ nvidia,pull = <0>;
+ nvidia,tristate = <0>;
+ };
+ sdmmc4_dat0_paa0 {
+ nvidia,pins = "sdmmc4_dat0_paa0",
+ "sdmmc4_dat1_paa1",
+ "sdmmc4_dat2_paa2",
+ "sdmmc4_dat3_paa3",
+ "sdmmc4_dat4_paa4",
+ "sdmmc4_dat5_paa5",
+ "sdmmc4_dat6_paa6",
+ "sdmmc4_dat7_paa7";
+ nvidia,function = "sdmmc4";
+ nvidia,pull = <2>;
+ nvidia,tristate = <0>;
+ };
+ };
+ };
+
+ sdhci@78000400 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&sdmmc4_default>;
+ };
diff --git a/Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt b/Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt
new file mode 100644
index 000000000000..c95ea8278f87
--- /dev/null
+++ b/Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt
@@ -0,0 +1,128 @@
+== Introduction ==
+
+Hardware modules that control pin multiplexing or configuration parameters
+such as pull-up/down, tri-state, drive-strength etc are designated as pin
+controllers. Each pin controller must be represented as a node in device tree,
+just like any other hardware module.
+
+Hardware modules whose signals are affected by pin configuration are
+designated client devices. Again, each client device must be represented as a
+node in device tree, just like any other hardware module.
+
+For a client device to operate correctly, certain pin controllers must
+set up certain specific pin configurations. Some client devices need a
+single static pin configuration, e.g. set up during initialization. Others
+need to reconfigure pins at run-time, for example to tri-state pins when the
+device is inactive. Hence, each client device can define a set of named
+states. The number and names of those states is defined by the client device's
+own binding.
+
+The common pinctrl bindings defined in this file provide an infrastructure
+for client device device tree nodes to map those state names to the pin
+configuration used by those states.
+
+Note that pin controllers themselves may also be client devices of themselves.
+For example, a pin controller may set up its own "active" state when the
+driver loads. This would allow representing a board's static pin configuration
+in a single place, rather than splitting it across multiple client device
+nodes. The decision to do this or not somewhat rests with the author of
+individual board device tree files, and any requirements imposed by the
+bindings for the individual client devices in use by that board, i.e. whether
+they require certain specific named states for dynamic pin configuration.
+
+== Pinctrl client devices ==
+
+For each client device individually, every pin state is assigned an integer
+ID. These numbers start at 0, and are contiguous. For each state ID, a unique
+property exists to define the pin configuration. Each state may also be
+assigned a name. When names are used, another property exists to map from
+those names to the integer IDs.
+
+Each client device's own binding determines the set of states that must be
+defined in its device tree node, and whether to define the set of state
+IDs that must be provided, or whether to define the set of state names that
+must be provided.
+
+Required properties:
+pinctrl-0: List of phandles, each pointing at a pin configuration
+ node. These referenced pin configuration nodes must be child
+ nodes of the pin controller that they configure. Multiple
+ entries may exist in this list so that multiple pin
+ controllers may be configured, or so that a state may be built
+ from multiple nodes for a single pin controller, each
+ contributing part of the overall configuration. See the next
+ section of this document for details of the format of these
+ pin configuration nodes.
+
+ In some cases, it may be useful to define a state, but for it
+ to be empty. This may be required when a common IP block is
+ used in an SoC either without a pin controller, or where the
+ pin controller does not affect the HW module in question. If
+ the binding for that IP block requires certain pin states to
+ exist, they must still be defined, but may be left empty.
+
+Optional properties:
+pinctrl-1: List of phandles, each pointing at a pin configuration
+ node within a pin controller.
+...
+pinctrl-n: List of phandles, each pointing at a pin configuration
+ node within a pin controller.
+pinctrl-names: The list of names to assign to the states. List entry 0 defines the
+ name for integer state ID 0, list entry 1 for state ID 1, and
+ so on.
+
+For example:
+
+ /* For a client device requiring named states */
+ device {
+ pinctrl-names = "active", "idle";
+ pinctrl-0 = <&state_0_node_a>;
+ pinctrl-1 = <&state_1_node_a &state_1_node_b>;
+ };
+
+ /* For the same device if using state IDs */
+ device {
+ pinctrl-0 = <&state_0_node_a>;
+ pinctrl-1 = <&state_1_node_a &state_1_node_b>;
+ };
+
+ /*
+ * For an IP block whose binding supports pin configuration,
+ * but in use on an SoC that doesn't have any pin control hardware
+ */
+ device {
+ pinctrl-names = "active", "idle";
+ pinctrl-0 = <>;
+ pinctrl-1 = <>;
+ };
+
+== Pin controller devices ==
+
+Pin controller devices should contain the pin configuration nodes that client
+devices reference.
+
+For example:
+
+ pincontroller {
+ ... /* Standard DT properties for the device itself elided */
+
+ state_0_node_a {
+ ...
+ };
+ state_1_node_a {
+ ...
+ };
+ state_1_node_b {
+ ...
+ };
+ };
+
+The contents of each of those pin configuration child nodes is defined
+entirely by the binding for the individual pin controller device. There
+exists no common standard for this content.
+
+The pin configuration nodes need not be direct children of the pin controller
+device; they may be grandchildren, for example. Whether this is legal, and
+whether there is any interaction between the child and intermediate parent
+nodes, is again defined entirely by the binding for the individual pin
+controller device.
diff --git a/Documentation/devicetree/bindings/pinmux/pinmux_nvidia.txt b/Documentation/devicetree/bindings/pinmux/pinmux_nvidia.txt
deleted file mode 100644
index 36f82dbdd14d..000000000000
--- a/Documentation/devicetree/bindings/pinmux/pinmux_nvidia.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-NVIDIA Tegra 2 pinmux controller
-
-Required properties:
-- compatible : "nvidia,tegra20-pinmux"
-
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index 3bbd5c51605a..5ff4d2b84f72 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -29,13 +29,6 @@ The buffer-user
in memory, mapped into its own address space, so it can access the same area
of memory.
-*IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
-For this first version, A buffer shared using the dma_buf sharing API:
-- *may* be exported to user space using "mmap" *ONLY* by exporter, outside of
- this framework.
-- with this new iteration of the dma-buf api cpu access from the kernel has been
- enable, see below for the details.
-
dma-buf operations for device dma only
--------------------------------------
@@ -313,6 +306,83 @@ Access to a dma_buf from the kernel context involves three steps:
enum dma_data_direction dir);
+Direct Userspace Access/mmap Support
+------------------------------------
+
+Being able to mmap an exported dma-buf buffer object has two main use-cases:
+- CPU fallback processing in a pipeline and
+- supporting existing mmap interfaces in importers.
+
+1. CPU fallback processing in a pipeline
+
+ In many processing pipelines it is sometimes required that the cpu can access
+ the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...). To avoid
+ the need to handle this specially in userspace frameworks for buffer sharing
+ it's ideal if the dma_buf fd itself can be used to access the backing storage
+ from userspace using mmap.
+
+ Furthermore, Android's ION framework already supports this (and is otherwise
+ rather similar to dma-buf from a userspace consumer's point of view, in using
+ fds as handles, too). So it's beneficial to support this in a similar fashion
+ on dma-buf to have a good transition path for existing Android userspace.
+
+ No special interfaces are required; userspace simply calls mmap on the dma-buf fd.
+
+2. Supporting existing mmap interfaces in exporters
+
+ Similar to the motivation for kernel cpu access, it is again important that
+ the userspace code of a given importing subsystem can use the same interfaces
+ with an imported dma-buf buffer object as with a native buffer object. This is
+ especially important for drm where the userspace part of contemporary OpenGL,
+ X, and other drivers is huge, and reworking them to use a different way to
+ mmap a buffer would be rather invasive.
+
+ The assumption in the current dma-buf interfaces is that redirecting the
+ initial mmap is all that's needed. A survey of some of the existing
+ subsystems shows that no driver seems to do any nefarious thing like syncing
+ up with outstanding asynchronous processing on the device or allocating
+ special resources at fault time. So hopefully this is good enough, since
+ adding interfaces to intercept pagefaults and allow pte shootdowns would
+ increase the complexity quite a bit.
+
+ Interface:
+ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
+ unsigned long);
+
+ If the importing subsystem simply provides a special-purpose mmap call to set
+ up a mapping in userspace, calling do_mmap with dma_buf->file will equally
+ achieve that for a dma-buf object.
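+
+ A minimal importer-side sketch (hypothetical names; it assumes the importer
+ keeps the dma_buf pointer in a private buffer object of its own):
+
+     #include <linux/dma-buf.h>
+     #include <linux/fs.h>
+     #include <linux/mm.h>
+
+     /* foo_buffer is a hypothetical importer object wrapping the dma_buf */
+     struct foo_buffer {
+             struct dma_buf *dmabuf;
+     };
+
+     static int foo_fop_mmap(struct file *file, struct vm_area_struct *vma)
+     {
+             struct foo_buffer *buf = file->private_data;
+
+             /* hand the vma over to the exporter, mapping from offset 0 */
+             return dma_buf_mmap(buf->dmabuf, vma, 0);
+     }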
+
+3. Implementation notes for exporters
+
+ Because dma-buf buffers have invariant size over their lifetime, the dma-buf
+ core checks whether a vma is too large and rejects such mappings. The
+ exporter hence does not need to duplicate this check.
+
+ Because existing importing subsystems might presume coherent mappings for
+ userspace, the exporter needs to set up a coherent mapping. If that's not
+ possible, it needs to fake coherency by manually shooting down ptes when
+ leaving the cpu domain and flushing caches at fault time. Note that all the
+ dma_buf files share the same anon inode, hence the exporter needs to replace
+ the dma_buf file stored in vma->vm_file with its own if pte shootdown is
+ required. This is because the kernel uses the underlying inode's address_space
+ for vma tracking (and hence pte tracking at shootdown time with
+ unmap_mapping_range).
+
+ If the above shootdown dance turns out to be too expensive in certain
+ scenarios, we can extend dma-buf with a more explicit cache tracking scheme
+ for userspace mappings. But the current assumption is that using mmap is
+ always a slower path, so some inefficiencies should be acceptable.
+
+ Exporters that shoot down mappings (for any reason) shall not do any
+ synchronization at fault time with outstanding device operations.
+ Synchronization is an orthogonal issue to sharing the backing storage of a
+ buffer and hence should not be handled by dma-buf itself. This is explicitly
+ mentioned here because many people seem to want something like this, but if
+ different exporters handle this differently, buffer sharing can fail in
+ interesting ways depending upon the exporter (if userspace starts depending
+ upon this implicit synchronization).
+
Miscellaneous notes
-------------------
@@ -336,6 +406,20 @@ Miscellaneous notes
the exporting driver to create a dmabuf fd must provide a way to let
userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
+- If an exporter needs to manually flush caches and hence needs to fake
+ coherency for mmap support, it needs to be able to zap all the ptes pointing
+ at the backing storage. Now linux mm needs a struct address_space associated
+ with the struct file stored in vma->vm_file to do that with the function
+ unmap_mapping_range. But the dma_buf framework only backs every dma_buf fd
+ with the anon_file struct file, i.e. all dma_bufs share the same file.
+
+ Hence exporters need to set up their own file (and address_space) association
+ by setting vma->vm_file and adjusting vma->vm_pgoff in the dma_buf mmap
+ callback. In the specific case of a gem driver the exporter could use the
+ shmem file already provided by gem (and set vm_pgoff = 0). Exporters can then
+ zap ptes by unmapping the corresponding range of the struct address_space
+ associated with their own file.
+
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h
diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt
index 2a596a4fc23e..ef4fa7b423d2 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -276,3 +276,7 @@ REGULATOR
devm_regulator_get()
devm_regulator_put()
devm_regulator_bulk_get()
+
+PINCTRL
+ devm_pinctrl_get()
+ devm_pinctrl_put()
diff --git a/Documentation/edp/debugfs b/Documentation/edp/debugfs
new file mode 100644
index 000000000000..654eb259c512
--- /dev/null
+++ b/Documentation/edp/debugfs
@@ -0,0 +1,36 @@
+
+EDP DEBUGFS
+
+1. Introduction
+
+EDP debugfs root is at /sys/kernel/debug/edp. Manager and client objects
+appear as subfolders under the root, forming a tree structure similar to
+the EDP sysfs entries.
+
+The following sections describe the debugfs attributes. Unless stated
+otherwise, all attributes have RW permissions.
+
+2. EDP manager
+
+ [1] cap: Peak current capacity - reading will return the present
+ value and writing will set a new cap. Note that the cap cannot
+ be set lower than the sum of E0 currents of all clients.
+ Lowering the cap might result in throttling of clients.
+
+ [2] status (read-only): Gives a snapshot of the manager and its
+ budget distribution.
+
+3. EDP clients
+
+The following is the list of common client attributes. The client driver may
+add additional device-specific attributes under the same folder.
+
+ [1] current: Force a certain client's E-state. Write the state
+ index to force the state. Read will return the state value.
+ The request is processed only if it can be handled fully
+ (that is, the client will not be assigned a lesser E-state).
+ If the state cannot be changed due to insufficient budget,
+ the operation fails. Note that this request may be
+ overridden by other requests. To prevent this from happening,
+ choose the debug policy governor (see the EDP governor
+ documentation).
diff --git a/Documentation/edp/design b/Documentation/edp/design
new file mode 100644
index 000000000000..67090623188c
--- /dev/null
+++ b/Documentation/edp/design
@@ -0,0 +1,155 @@
+
+SYSTEM EDP CAPPING DESIGN
+
+1. Introduction
+
+This document uses a glossary of terms to explain the design of System
+EDP capping.
+
+2. System EDP manager
+
+The central piece of software which dynamically allocates
+current-sourcing capacity to EDP client drivers for use by their
+devices. A system may have more than one manager. Managers are
+distinguished by their unique names.
+
+3. EDP client driver
+
+The device driver associated with one particular power consuming device.
+EDP client drivers register with the System EDP manager to monitor and
+manage the current consumption of their associated device. A client can
+be registered with only one manager at any given time.
+
+4. E-state
+
+Electrical states which are defined per EDP client and numbered {...
+E-2, E-1, E0, E1, E2...}. Each E-state for a given driver indicates a
+particular maximum current consumption.
+
+ [*] Higher E-state: an E-state closer to E-infinity. E-1 is
+ higher than E0, E1 is higher than E2 etc.
+ [*] Lower E-state: an E-state closer to E+infinity. E-1 is lower
+ than E-2, E1 is lower than E0 etc.
+ [*] Positive E-states: E0, E1, E2...
+ [*] Negative E-state: ...E-3, E-2, E-1.
+ [*] E0: the system EDP manager guarantees that it can provide
+ E0 simultaneously for all devices.
+
+In practice, E-states are defined as an array of maximum current
+consumption for each state and are identified by their offset into this
+array. The states should be sorted in descending order (highest E-state
+appearing first).
+
+E0 for each client must be specified explicitly by providing its id
+while registering the client with a manager. The rest of the E-states are
+determined by their position relative to E0. For example, E-1
+is the state at e0_index - 1, E2 is the state at e0_index + 2 etc.
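+
+A small illustration with hypothetical values (the field names follow the
+EDP client examples in Documentation/edp/howto):
+
+	/* mA, sorted in descending order: E-2, E-1, E0, E1, E2 */
+	static unsigned int foo_states[] = { 5000, 4000, 3000, 2000, 1000 };
+
+	/* E0 is the third entry, so e0_index = 2; E-1 sits at index 1 */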
+
+5. EDP client registration
+
+An EDP client calls into the EDP manager (roughly once per boot) to
+register itself as a client. During registration, the EDP client
+provides its list of E-states to the System EDP manager. If a client
+attempts to register with an intolerably high E0 current (i.e. a current
+which pushes the sum of all E0 currents too high), the EDP manager will
+raise a fatal error.
+
+6. E-state request
+
+An EDP client calls into the EDP manager (issues an E-state request)
+BEFORE going to a higher E-state and AFTER going to a lower E-state. The
+EDP manager will:
+
+ [*] always approve requests to go to a lower E-state
+ [*] always approve requests to go to a non-negative E-state and
+ [*] either approve or reject a request to go to a higher
+ negative E-state.
+
+When the EDP manager rejects an E-state request, it returns a lower
+E-state to the client. The client then transitions to that E-state
+without needing to make a new request.
+
+7. Throttling
+
+A client is said to be throttled when its manager requires it to
+transition to a lower E-state in order to meet requests from other
+clients. A client is never asked to transition beyond E0 which means
+that throttling is done only to those clients that are running at a
+negative E-state. The EDP manager blocks until the client finishes
+transitioning to the lower E-state.
+
+8. Callbacks
+
+An EDP client may provide the following callbacks which are invoked by
+the manager at various stages.
+
+ [*] throttle: invoked when the client is being throttled;
+ mandatory for those clients that support negative E-states
+ [*] promotion notification (optional): to inform the client
+ that a previously rejected request is granted now.
+ [*] loan update notification: to inform the client that a loan
+ amount is changed; mandatory for clients that are engaged
+ in a loan agreement.
+ [*] loan closure: to inform the client that a loan is now
+ closed; mandatory for clients that are engaged in a loan
+ agreement.
+
+All callbacks are synchronous which means that the total time for an
+operation is affected by client processing. Therefore, it is important
+to reschedule any non-critical, time-consuming steps to a different
+context.
+
+IMPORTANT: Callbacks are invoked by the EDP manager while holding its
+lock. Therefore, clients should never call into the EDP framework from
+the callback path. Doing so will result in a deadlock.
+
+9. EDP lender
+
+Some current consuming devices have side-band mechanisms which lets them
+share a current consumption budget. An EDP lender is an EDP client
+driver:
+
+ [*] whose device typically draws current less than some
+ (dynamically varying) threshold
+ [*] whose device occasionally draws more than its threshold but less
+ than allowed by its current E-state
+ [*] which asserts (or whose device asserts) a side-band signal
+ prior to exceeding the threshold
+
+10. EDP loan
+
+An EDP loan is a contract allowing an EDP borrower to borrow current
+consumption budget according to the difference between an EDP lender's
+E-state and its threshold when the side-band is deasserted.
+
+11. EDP borrower
+
+An EDP borrower is an EDP client driver which:
+
+ [*] gets its base current consumption budget by setting an
+ E-state with the EDP manager
+ [*] enters into an EDP loan with an EDP lender
+ [*] borrows additional current budget from the EDP lender
+ according to the difference between the lender's E-state
+ and its threshold when the side-band is deasserted.
+ [*] stops borrowing from the EDP lender's budget whenever the
+ side-band is asserted
+
+12. EDP loan API
+
+An EDP lender and an EDP borrower register their loan with the EDP
+manager via the EDP loan API. Additionally the EDP lender manages its
+threshold via the EDP loan API. The EDP manager informs the borrower
+whenever the loan size changes (due to a change in the lender's E-state
+or threshold).
+
+For example, a modem's peak transmit state might require E0 but its
+typical transmit state requires only E2. The modem driver can loan the
+difference between typical and peak to the CPU as long as the CPU stops
+borrowing when it is told to do so (the loan size becomes 0).
+
+13. Policies
+
+Policies decide how to allocate the available power budget to clients.
+These are implemented by corresponding governors and are explained in a
+separate document.
diff --git a/Documentation/edp/dynamic-edp-capping b/Documentation/edp/dynamic-edp-capping
new file mode 100644
index 000000000000..091d4122ecaa
--- /dev/null
+++ b/Documentation/edp/dynamic-edp-capping
@@ -0,0 +1,36 @@
+
+DYNAMIC EDP CAPPING IN GENERAL
+
+The goal of dynamic EDP capping is to maximize performance of a system
+without violating the peak-current capacity of that system's power
+source.
+
+Dynamic EDP Capping makes sense in systems with:
+ [*] a power source of finite peak-current capacity
+ [*] one or more controllable variables which have a known
+ effect on peak current consumption from the power source.
+ [*] One or more variables whose changes are:
+ - observable in advance and
+ - which have a known effect on peak current consumption
+ from the power source
+
+In a system with only one controllable variable, the control algorithm
+is extremely simple. When the observables change, the algorithm solves
+for the maximum permissible current associated with the controllable and
+then limits the controllable as necessary to keep its current under that
+limit.
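+
+A toy sketch of that single-controllable case (all values and the current
+model are hypothetical; ARRAY_SIZE is the usual kernel helper):
+
+	/* frequencies in kHz, descending, with a crude per-point current model in mA */
+	static const unsigned int freq_khz[] = { 1500000, 1200000, 1000000, 600000 };
+	static const unsigned int freq_ma[]  = {    7500,    6000,    3000,   2000 };
+
+	static unsigned int pick_freq(unsigned int budget_ma, unsigned int observed_ma)
+	{
+		unsigned int limit = budget_ma > observed_ma ? budget_ma - observed_ma : 0;
+		unsigned int i;
+
+		/* highest operating point whose estimated current fits the remaining budget */
+		for (i = 0; i < ARRAY_SIZE(freq_khz); i++)
+			if (freq_ma[i] <= limit)
+				return freq_khz[i];
+
+		return freq_khz[ARRAY_SIZE(freq_khz) - 1]; /* fall back to the lowest point */
+	}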
+
+In a system with more than one controllable, the control algorithm
+needs to worry about a strategy which controls the sum of their current
+while maximizing performance. There may or may not be a provably correct
+algorithm for that. If not, the EDP capping needs to fall back on a
+heuristic-based policy for choosing how to spread the pain among the
+controllables.
+
+In practice, the selection of controllables and observables is
+debatable. The simpler the set, the lower the software &
+characterization overhead. However, the simpler the set, the less
+accuracy that the observables+controllables provide in estimating the
+peak current. The larger the worst-case estimation error, the more
+performance must be sacrificed from the controllables in order to avoid
+violating the power source's peak-current capacity.
diff --git a/Documentation/edp/governors b/Documentation/edp/governors
new file mode 100644
index 000000000000..30e12b200a00
--- /dev/null
+++ b/Documentation/edp/governors
@@ -0,0 +1,84 @@
+
+EDP GOVERNORS
+
+1. Introduction
+
+EDP governors implement the policy for current budget allocation among
+clients. In general, the governor decides budget allocation in the
+following situations:
+
+ [*] When a client makes an E-state request. If the request
+ cannot be met with the remaining current, other clients may be
+ throttled to recover extra current which can then be granted
+ to the requester. If the request is unfairly high, a reduced
+ E-state has to be decided according to the policy.
+
+ [*] When there is an increase in the manager's remaining cap,
+ the governor will try to distribute the surplus among
+ clients whose requests were previously rejected or who were
+ throttled during the above step.
+
+ [*] When a client has more than one borrower, the loan has to be
+ distributed.
+
+The following sections provide a short description of the available
+governors.
+
+2. Priority
+
+As the name indicates, this governor implements a priority based
+allocation in which higher priority clients are given preference. When a
+budget recovery takes place, lower priority clients are throttled before
+the higher priority ones. Similarly, during a promotion cycle or during
+a loan update, higher priority clients are served first.
+
+If the request cannot be satisfied by throttling lower priority
+clients, the requested E-state may be lowered at most to E0. This
+ensures that higher priority clients are throttled only to provide
+the minimum guaranteed E-state.
+
+3. Overage
+
+The overage governor uses a proportional allocation based on the difference
+between the current E-state level and E0 (named the 'overage'). This
+causes all clients to increase or decrease their E-state somewhat
+simultaneously. Hence this is a fair allocation policy and ensures that no
+client is throttled too much.
+
+4. Fair
+
+The fair governor is similar to the overage policy, but the proportion is
+based on the E0-state level of the clients.
+
+5. Best Fit
+
+This policy searches for a best-fit solution in which the number of
+throttles and the remaining current are minimized. If the optimal solution
+includes an E-state which is less than what is requested, then that will
+be approved (subject to the general EDP rules).
+
+Since the perfect solution would involve several passes across all
+clients, a trade-off is made to approximate the optimum so that the
+algorithm complexity remains linear.
+
+6. Least Recently Requested (LRR)
+
+An arrival-queue based policy where the least recently requested client
+is throttled first.
+
+7. Most Recently Requested (MRR)
+
+Another arrival-queue based policy where the most recently requested
+client is throttled first.
+
+8. Round Robin (RR)
+
+In this policy, clients are throttled in a round-robin fashion.
+
+9. Debug
+
+When the debug policy governor is selected, the framework stops
+processing requests from clients. Further changes to the client E-states
+can only be made manually via debugfs (see the EDP debugfs
+documentation). This allows one to do manual budget allocations and
+prevent clients from overriding them.
diff --git a/Documentation/edp/howto b/Documentation/edp/howto
new file mode 100644
index 000000000000..f845eb710ea8
--- /dev/null
+++ b/Documentation/edp/howto
@@ -0,0 +1,200 @@
+
+EDP API GUIDE
+
+1. Introduction
+
+This document explains how to set up an EDP framework for a system. It is
+assumed that you have read 'dynamic-edp-capping' and 'design' before
+getting here.
+
+2. Config flags
+
+EDP framework implementation depends on the CONFIG_EDP_FRAMEWORK flag.
+When this is disabled, all the APIs either return an error code or do
+nothing.
+
+3. Include files
+
+#include <linux/edp.h>
+
+4. EDP manager
+
+The manager represents the current source with its limited capacity that
+needs to be budgeted across various client drivers. A typical example
+is the battery. As this is the basic building block of the framework, it
+is necessary to create and register the manager object before the
+clients can make any request. The following is an example:
+
+ #include <linux/edp.h>
+
+ /* Define the battery EDP manager - imax indicates the cap */
+ struct edp_manager battery_edp_manager = {
+ .name = "battery",
+ .imax = 9800
+ };
+
+ ...
+
+ /* Register the battery EDP manager */
+ static int __init board_init(void)
+ {
+ return edp_register_manager(&battery_edp_manager);
+ }
+ early_initcall(board_init);
+
+5. EDP client
+
+A client needs to be registered before it can make requests. The following
+examples show how the usual operations are performed.
+
+ Example 1:
+
+ /* E-state ids */
+ #define CPU_EDP_MAX 0
+ #define CPU_EDP_HIGH 1
+ #define CPU_EDP_NORMAL 2
+ #define CPU_EDP_LOW 3
+ #define CPU_EDP_MIN 4
+
+ /* E-state array */
+ static unsigned int cpu_edp_states[] = {
+ 7500, 6000, 3000, 2000, 1000
+ };
+
+ /* throttle callback function */
+ static void throttle_cpu(unsigned int new_state)
+ {
+ /* lower the operating point */
+ ...
+ }
+
+ /*
+ * promotion call back - a previously rejected request is now
+ * granted
+ */
+ static void promote_cpu(unsigned int new_state)
+ {
+ /* increase the operating point */
+ ...
+ }
+
+ /* loan size changed */
+ static unsigned int update_cpu_loan(unsigned int new_size,
+ struct edp_client *lender)
+ {
+ /* increase the operating point */
+ ...
+
+ /* return the amount of loan consumed */
+ return new_size;
+ }
+
+ /* cpu client: see the include header for more info */
+ struct edp_client cpu_edp_client = {
+ .name = "cpu",
+ .states = cpu_edp_states,
+ .num_states = ARRAY_SIZE(cpu_edp_states),
+ .e0_index = CPU_EDP_NORMAL,
+ .priority = EDP_MIN_PRIO,
+ .throttle = throttle_cpu,
+ .notify_promotion = promote_cpu,
+ .notify_loan_update = update_cpu_loan
+ };
+
+ ...
+
+ static int __init platform_cpu_dvfs_init(void)
+ {
+ ...
+
+ /* register the EDP client */
+ if (edp_register_client(&battery_edp_manager,
+ &cpu_edp_client))
+ /* fatal error! */
+
+ /* request E0 - must succeed */
+ err = edp_update_client_request(&cpu_edp_client,
+ CPU_EDP_NORMAL, NULL);
+
+ /* get the modem client pointer */
+ modem_client = edp_get_client("modem");
+
+ /* borrow from modem */
+ err = edp_register_loan(modem_client, &cpu_edp_client);
+
+ ...
+ }
+
+ static int cpu_target(struct cpufreq_policy *policy,
+ unsigned int target_freq,
+ unsigned int relation)
+ {
+ unsigned int req;
+ unsigned int approved;
+
+ ...
+
+ /* Calculate E-state id for target_freq */
+ req = to_estateid(target_freq);
+ err = edp_update_client_request(&cpu_edp_client, req,
+ &approved);
+
+ if (approved != req)
+ /* got a lower E-state granted */
+
+ ...
+ }
+
+ Example 2:
+
+ static unsigned int modem_states[] = { ... };
+
+ /* modem client */
+ struct edp_client modem_edp_client = {
+ .name = "modem",
+ .states = modem_states,
+ .num_states = ARRAY_SIZE(modem_states),
+ .e0_index = MODEM_EDP_E0,
+ .priority = EDP_MAX_PRIO + 3,
+ .max_borrowers = 1,
+ ...
+ };
+
+ static int __init modem_edp_init(void)
+ {
+ ...
+
+ /* get the manager */
+ battery_manager = edp_get_manager("battery");
+ if (!battery_manager)
+ /* fatal error! */
+
+ err = edp_register_client(battery_manager,
+ &modem_edp_client);
+
+ ...
+ }
+
+ static void update_modem_state(int state)
+ {
+ ...
+
+ if (state == MODEM_RELAX) {
+ ...
+
+ /* calc loan threshold */
+ threshold = ...
+ err = edp_update_loan_threshold(
+ &modem_edp_client, threshold);
+ ...
+ } else if (state == MODEM_RUNNING) {
+ err = edp_update_client_request(
+ &modem_edp_client,
+ MODEM_EDP_E2H, &approved);
+
+ /* freeze the loan */
+ err = edp_update_loan_threshold(
+ &modem_edp_client, 0);
+ ...
+ }
+ }
diff --git a/Documentation/edp/sysfs b/Documentation/edp/sysfs
new file mode 100644
index 000000000000..4927086ba2ff
--- /dev/null
+++ b/Documentation/edp/sysfs
@@ -0,0 +1,41 @@
+
+EDP SYSFS
+
+1. Introduction
+
+EDP sysfs root is at /sys/power/edp. Manager and client objects appear
+as subfolders under the root forming a tree structure where clients
+appear under the managers to whom they are registered.
+
+The following sections describe the sysfs attributes. Unless explicitly
+mentioned, all files are read-only.
+
+2. EDP root level attributes
+
+ [1] governors: shows the names of the available EDP policy governors.
+
+3. EDP manager
+
+Manager entries appear under the EDP root folder as subfolders with the
+same name. Each contains the following attributes:
+
+ [1] cap: peak current capacity
+ [2] remaining: remaining current
+ [3] governor: current policy governor - writing to this will
+ change the governor
+
+4. EDP clients
+
+Client objects appear under their manager folders as subfolders with the
+client name. Attributes:
+
+ [1] states: E-state values
+ [2] num_states: number of E-states
+ [3] E0: E0 state value
+ [4] max_borrowers: maximum number of borrowers allowed
+ [5] priority: client's priority
+ [6] request: current request value
+ [7] current: current state's value
+ [8] threshold: loan threshold
+ [9] borrowers: number of borrowers
+ [10] number of loans.
diff --git a/Documentation/hid/uhid.txt b/Documentation/hid/uhid.txt
new file mode 100644
index 000000000000..4627c4241ece
--- /dev/null
+++ b/Documentation/hid/uhid.txt
@@ -0,0 +1,169 @@
+ UHID - User-space I/O driver support for HID subsystem
+ ========================================================
+
+The HID subsystem needs two kinds of drivers. In this document we call them:
+
+ 1. The "HID I/O Driver" is the driver that performs raw data I/O to the
+ low-level device. Internally, they register an hid_ll_driver structure with
+ the HID core. They perform device setup, read raw data from the device and
+ push it into the HID subsystem and they provide a callback so the HID
+ subsystem can send data to the device.
+
+ 2. The "HID Device Driver" is the driver that parses HID reports and reacts on
+ them. There are generic drivers like "generic-usb" and "generic-bluetooth"
+ which adhere to the HID specification and provide the standardized features.
+ But there may be special drivers and quirks for each non-standard device out
+ there. Internally, they use the hid_driver structure.
+
+Historically, the USB stack was the first subsystem to provide an HID I/O
+Driver. However, other standards like Bluetooth have adopted the HID specs and
+may provide HID I/O Drivers, too. The UHID driver allows user-space to
+implement HID I/O Drivers and feed the data into the kernel HID subsystem.
+
+This allows user-space to operate on the same level as USB-HID, Bluetooth-HID
+and similar. It does not provide a way to write HID Device Drivers, though. Use
+hidraw for this purpose.
+
+There is an example user-space application in ./samples/uhid/uhid-example.c
+
+The UHID API
+------------
+
+UHID is accessed through a character misc-device. The minor-number is allocated
+dynamically so you need to rely on udev (or similar) to create the device node.
+This is /dev/uhid by default.
+
+If a new device is detected by your HID I/O Driver and you want to register this
+device with the HID subsystem, then you need to open /dev/uhid once for each
+device you want to register. All further communication is done by read()'ing or
+write()'ing "struct uhid_event" objects. Non-blocking operations are supported
+by setting O_NONBLOCK.
+
+struct uhid_event {
+ __u32 type;
+ union {
+ struct uhid_create_req create;
+ struct uhid_data_req data;
+ ...
+ } u;
+};
+
+The "type" field contains the ID of the event. Depending on the ID different
+payloads are sent. You must not split a single event across multiple read()'s or
+multiple write()'s. A single event must always be sent as a whole. Furthermore,
+only a single event can be sent per read() or write(). Pending data is ignored.
+If you want to handle multiple events in a single syscall, then use vectored
+I/O with readv()/writev().
+
+The first thing you should do is to send an UHID_CREATE event. This will
+register the device. UHID will respond with an UHID_START event. You can now
+start sending data to and reading data from UHID. However, unless UHID sends the
+UHID_OPEN event, the internally attached HID Device Driver has no user attached.
+That is, you might put your device to sleep unless you receive the UHID_OPEN
+event. If you receive the UHID_OPEN event, you should start I/O. If the last
+user closes the HID device, you will receive an UHID_CLOSE event. This may be
+followed by an UHID_OPEN event again and so on. There is no need to perform
+reference-counting in user-space. That is, you will never receive multiple
+UHID_OPEN events without an UHID_CLOSE event. The HID subsystem performs
+ref-counting for you.
+You may decide to ignore UHID_OPEN/UHID_CLOSE, though. I/O is allowed even
+though the device may have no users.
+
+If you want to send data to the HID subsystem, you send an UHID_INPUT event with
+your raw data payload. If the kernel wants to send data to the device, you will
+read an UHID_OUTPUT or UHID_OUTPUT_EV event.
+
+If your device disconnects, you should send an UHID_DESTROY event. This will
+unregister the device. You can now send UHID_CREATE again to register a new
+device.
+If you close() the fd, the device is automatically unregistered and destroyed
+internally.
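+
+A minimal user-space sketch of the create/destroy cycle (not taken from a real
+driver; it assumes the event constants and payload layout declared in
+<linux/uhid.h> and reduces error handling to the bare minimum):
+
+	#include <fcntl.h>
+	#include <string.h>
+	#include <unistd.h>
+	#include <linux/input.h>
+	#include <linux/uhid.h>
+
+	static int uhid_send(int fd, const struct uhid_event *ev)
+	{
+		return write(fd, ev, sizeof(*ev)) == sizeof(*ev) ? 0 : -1;
+	}
+
+	int main(void)
+	{
+		/* placeholder bytes; a real driver passes a full report descriptor */
+		static unsigned char rdesc[] = { 0x05, 0x01 };
+		struct uhid_event ev;
+		int fd = open("/dev/uhid", O_RDWR | O_CLOEXEC);
+
+		if (fd < 0)
+			return 1;
+
+		memset(&ev, 0, sizeof(ev));
+		ev.type = UHID_CREATE;
+		strcpy((char *)ev.u.create.name, "example-uhid-device");
+		ev.u.create.rd_data = rdesc;
+		ev.u.create.rd_size = sizeof(rdesc);
+		ev.u.create.bus = BUS_USB;
+		uhid_send(fd, &ev);		/* expect UHID_START on the next read() */
+
+		/* ... read() UHID_OPEN/UHID_OUTPUT events, write() UHID_INPUT events ... */
+
+		memset(&ev, 0, sizeof(ev));
+		ev.type = UHID_DESTROY;
+		uhid_send(fd, &ev);
+		close(fd);
+		return 0;
+	}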
+
+write()
+-------
+write() allows you to modify the state of the device and feed input data into
+the kernel. The following types are supported: UHID_CREATE, UHID_DESTROY and
+UHID_INPUT. The kernel will parse the event immediately and if the event ID is
+not supported, it will return -EOPNOTSUPP. If the payload is invalid, then
+-EINVAL is returned, otherwise, the amount of data that was read is returned and
+the request was handled successfully.
+
+ UHID_CREATE:
+ This creates the internal HID device. No I/O is possible until you send this
+ event to the kernel. The payload is of type struct uhid_create_req and
+ contains information about your device. You can start I/O now.
+
+ UHID_DESTROY:
+ This destroys the internal HID device. No further I/O will be accepted. There
+ may still be pending messages that you can receive with read() but no further
+ UHID_INPUT events can be sent to the kernel.
+ You can create a new device by sending UHID_CREATE again. There is no need to
+ reopen the character device.
+
+ UHID_INPUT:
+ You must send UHID_CREATE before sending input to the kernel! This event
+ contains a data-payload. This is the raw data that you read from your device.
+ The kernel will parse the HID reports and react on it.
+
+ UHID_FEATURE_ANSWER:
+ If you receive a UHID_FEATURE request you must answer with this request. You
+ must copy the "id" field from the request into the answer. Set the "err" field
+ to 0 if no error occurred or to EIO if an I/O error occurred.
+ If "err" is 0 then you should fill the buffer of the answer with the results
+ of the feature request and set "size" correspondingly.
+
+read()
+------
+read() will return a queued output report. These output reports can be of type
+UHID_START, UHID_STOP, UHID_OPEN, UHID_CLOSE, UHID_OUTPUT or UHID_OUTPUT_EV. No
+reaction is required to any of them but you should handle them according to your
+needs. Only UHID_OUTPUT and UHID_OUTPUT_EV have payloads.
+
+ UHID_START:
+ This is sent when the HID device is started. Consider this as an answer to
+ UHID_CREATE. This is always the first event that is sent.
+
+ UHID_STOP:
+ This is sent when the HID device is stopped. Consider this as an answer to
+ UHID_DESTROY.
+ If the kernel HID device driver closes the device manually (that is, you
+ didn't send UHID_DESTROY) then you should consider this device closed and send
+ an UHID_DESTROY event. You may want to reregister your device, though. This is
+ always the last message that is sent to you unless you reopen the device with
+ UHID_CREATE.
+
+ UHID_OPEN:
+ This is sent when the HID device is opened. That is, the data that the HID
+ device provides is read by some other process. You may ignore this event but
+ it is useful for power-management. As long as you haven't received this event
+ there is actually no other process that reads your data so there is no need to
+ send UHID_INPUT events to the kernel.
+
+ UHID_CLOSE:
+ This is sent when there are no more processes which read the HID data. It is
+ the counterpart of UHID_OPEN and you may as well ignore this event.
+
+ UHID_OUTPUT:
+ This is sent if the HID device driver wants to send raw data to the I/O
+ device. You should read the payload and forward it to the device. The payload
+ is of type "struct uhid_data_req".
+ This may be received even though you haven't received UHID_OPEN, yet.
+
+ UHID_OUTPUT_EV:
+ Same as UHID_OUTPUT but this contains a "struct input_event" as payload. This
+ is called for force-feedback, LED or similar events which are received through
+ an input device by the HID subsystem. You should convert this into raw reports
+ and send them to your device similar to events of type UHID_OUTPUT.
+
+ UHID_FEATURE:
+ This event is sent if the kernel driver wants to perform a feature request as
+ described in the HID specs. The report-type and report-number are available in
+ the payload.
+ The kernel serializes feature requests so there will never be two in parallel.
+ However, if you fail to respond with a UHID_FEATURE_ANSWER in a time-span of 5
+ seconds, then the requests will be dropped and a new one might be sent.
+ Therefore, the payload also contains an "id" field that identifies every
+ request.
+
+Document by:
+ David Herrmann <dh.herrmann@googlemail.com>
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 753d18ae0105..c0d908c1d1bc 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -508,6 +508,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Also note the kernel might malfunction if you disable
some critical bits.
+ cma=nn[MG] [ARM,KNL]
+ Sets the size of kernel global memory area for contiguous
+ memory allocations. For more information, see
+ include/linux/dma-contiguous.h
+
cmo_free_hint= [PPC] Format: { yes | no }
Specify whether pages are marked as being inactive
when they are freed. This is used in CMO environments
@@ -515,6 +520,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
a hypervisor.
Default: yes
+ coherent_pool=nn[KMG] [ARM,KNL]
+ Sets the size of memory pool for coherent, atomic dma
+ allocations, by default set to 256K.
+
code_bytes [X86] How many bytes of object code to print
in an oops report.
Range: 0 - 8192
@@ -2377,6 +2386,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
resume= [SWSUSP]
Specify the partition device for software suspend
+ Format:
+ {/dev/<dev> | PARTUUID=<uuid> | <int>:<int> | <hex>}
resume_offset= [SWSUSP]
Specify the offset from the beginning of the partition
diff --git a/Documentation/pinctrl.txt b/Documentation/pinctrl.txt
index d97bccf46147..e40f4b4e1977 100644
--- a/Documentation/pinctrl.txt
+++ b/Documentation/pinctrl.txt
@@ -152,11 +152,9 @@ static const struct foo_group foo_groups[] = {
};
-static int foo_list_groups(struct pinctrl_dev *pctldev, unsigned selector)
+static int foo_get_groups_count(struct pinctrl_dev *pctldev)
{
- if (selector >= ARRAY_SIZE(foo_groups))
- return -EINVAL;
- return 0;
+ return ARRAY_SIZE(foo_groups);
}
static const char *foo_get_group_name(struct pinctrl_dev *pctldev,
@@ -175,7 +173,7 @@ static int foo_get_group_pins(struct pinctrl_dev *pctldev, unsigned selector,
}
static struct pinctrl_ops foo_pctrl_ops = {
- .list_groups = foo_list_groups,
+ .get_groups_count = foo_get_groups_count,
.get_group_name = foo_get_group_name,
.get_group_pins = foo_get_group_pins,
};
@@ -186,13 +184,12 @@ static struct pinctrl_desc foo_desc = {
.pctlops = &foo_pctrl_ops,
};
-The pin control subsystem will call the .list_groups() function repeatedly
-beginning on 0 until it returns non-zero to determine legal selectors, then
-it will call the other functions to retrieve the name and pins of the group.
-Maintaining the data structure of the groups is up to the driver, this is
-just a simple example - in practice you may need more entries in your group
-structure, for example specific register ranges associated with each group
-and so on.
+The pin control subsystem will call the .get_groups_count() function to
+determine the total number of legal selectors, then it will call the other functions
+to retrieve the name and pins of the group. Maintaining the data structure of
+the groups is up to the driver, this is just a simple example - in practice you
+may need more entries in your group structure, for example specific register
+ranges associated with each group and so on.
Pin configuration
@@ -606,11 +603,9 @@ static const struct foo_group foo_groups[] = {
};
-static int foo_list_groups(struct pinctrl_dev *pctldev, unsigned selector)
+static int foo_get_groups_count(struct pinctrl_dev *pctldev)
{
- if (selector >= ARRAY_SIZE(foo_groups))
- return -EINVAL;
- return 0;
+ return ARRAY_SIZE(foo_groups);
}
static const char *foo_get_group_name(struct pinctrl_dev *pctldev,
@@ -629,7 +624,7 @@ static int foo_get_group_pins(struct pinctrl_dev *pctldev, unsigned selector,
}
static struct pinctrl_ops foo_pctrl_ops = {
- .list_groups = foo_list_groups,
+ .get_groups_count = foo_get_groups_count,
.get_group_name = foo_get_group_name,
.get_group_pins = foo_get_group_pins,
};
@@ -640,7 +635,7 @@ struct foo_pmx_func {
const unsigned num_groups;
};
-static const char * const spi0_groups[] = { "spi0_1_grp" };
+static const char * const spi0_groups[] = { "spi0_0_grp", "spi0_1_grp" };
static const char * const i2c0_groups[] = { "i2c0_grp" };
static const char * const mmc0_groups[] = { "mmc0_1_grp", "mmc0_2_grp",
"mmc0_3_grp" };
@@ -663,11 +658,9 @@ static const struct foo_pmx_func foo_functions[] = {
},
};
-int foo_list_funcs(struct pinctrl_dev *pctldev, unsigned selector)
+int foo_get_functions_count(struct pinctrl_dev *pctldev)
{
- if (selector >= ARRAY_SIZE(foo_functions))
- return -EINVAL;
- return 0;
+ return ARRAY_SIZE(foo_functions);
}
const char *foo_get_fname(struct pinctrl_dev *pctldev, unsigned selector)
@@ -703,7 +696,7 @@ void foo_disable(struct pinctrl_dev *pctldev, unsigned selector,
}
struct pinmux_ops foo_pmxops = {
- .list_functions = foo_list_funcs,
+ .get_functions_count = foo_get_functions_count,
.get_function_name = foo_get_fname,
.get_function_groups = foo_get_groups,
.enable = foo_enable,
@@ -786,7 +779,7 @@ and spi on the second function mapping:
#include <linux/pinctrl/machine.h>
-static const struct pinctrl_map __initdata mapping[] = {
+static const struct pinctrl_map mapping[] __initconst = {
{
.dev_name = "foo-spi.0",
.name = PINCTRL_STATE_DEFAULT,
@@ -952,13 +945,13 @@ case), we define a mapping like this:
The result of grabbing this mapping from the device with something like
this (see next paragraph):
- p = pinctrl_get(dev);
+ p = devm_pinctrl_get(dev);
s = pinctrl_lookup_state(p, "8bit");
ret = pinctrl_select_state(p, s);
or more simply:
- p = pinctrl_get_select(dev, "8bit");
+ p = devm_pinctrl_get_select(dev, "8bit");
Will be that you activate all the three bottom records in the mapping at
once. Since they share the same name, pin controller device, function and
@@ -992,7 +985,7 @@ foo_probe()
/* Allocate a state holder named "foo" etc */
struct foo_state *foo = ...;
- foo->p = pinctrl_get(&device);
+ foo->p = devm_pinctrl_get(&device);
if (IS_ERR(foo->p)) {
/* FIXME: clean up "foo" here */
return PTR_ERR(foo->p);
@@ -1000,24 +993,17 @@ foo_probe()
foo->s = pinctrl_lookup_state(foo->p, PINCTRL_STATE_DEFAULT);
if (IS_ERR(foo->s)) {
- pinctrl_put(foo->p);
/* FIXME: clean up "foo" here */
return PTR_ERR(s);
}
ret = pinctrl_select_state(foo->s);
if (ret < 0) {
- pinctrl_put(foo->p);
/* FIXME: clean up "foo" here */
return ret;
}
}
-foo_remove()
-{
- pinctrl_put(state->p);
-}
-
This get/lookup/select/put sequence can just as well be handled by bus drivers
if you don't want each and every driver to handle it and you know the
arrangement on your bus.
@@ -1029,6 +1015,11 @@ The semantics of the pinctrl APIs are:
kernel memory to hold the pinmux state. All mapping table parsing or similar
slow operations take place within this API.
+- devm_pinctrl_get() is a variant of pinctrl_get() that causes pinctrl_put()
+ to be called automatically on the retrieved pointer when the associated
+ device is removed. It is recommended to use this function over plain
+ pinctrl_get().
+
- pinctrl_lookup_state() is called in process context to obtain a handle to a
specific state for a the client device. This operation may be slow too.
@@ -1041,14 +1032,30 @@ The semantics of the pinctrl APIs are:
- pinctrl_put() frees all information associated with a pinctrl handle.
+- devm_pinctrl_put() is a variant of pinctrl_put() that may be used to
+ explicitly destroy a pinctrl object returned by devm_pinctrl_get().
+ However, use of this function will be rare, due to the automatic cleanup
+ that will occur even without calling it.
+
+ pinctrl_get() must be paired with a plain pinctrl_put().
+ pinctrl_get() may not be paired with devm_pinctrl_put().
+ devm_pinctrl_get() can optionally be paired with devm_pinctrl_put().
+ devm_pinctrl_get() may not be paired with plain pinctrl_put().
+
Usually the pin control core handled the get/put pair and call out to the
device drivers bookkeeping operations, like checking available functions and
the associated pins, whereas the enable/disable pass on to the pin controller
driver which takes care of activating and/or deactivating the mux setting by
quickly poking some registers.
-The pins are allocated for your device when you issue the pinctrl_get() call,
-after this you should be able to see this in the debugfs listing of all pins.
+The pins are allocated for your device when you issue the devm_pinctrl_get()
+call, after this you should be able to see this in the debugfs listing of all
+pins.
+
+NOTE: the pinctrl system will return -EPROBE_DEFER if it cannot find the
+requested pinctrl handles, for example if the pinctrl driver has not yet
+registered. Thus make sure that the error path in your driver gracefully
+cleans up and is ready to retry the probing later in the startup process.
System pin control hogging
@@ -1094,13 +1101,13 @@ it, disables and releases it, and muxes it in on the pins defined by group B:
#include <linux/pinctrl/consumer.h>
-foo_switch()
-{
- struct pinctrl *p;
- struct pinctrl_state *s1, *s2;
+struct pinctrl *p;
+struct pinctrl_state *s1, *s2;
+foo_probe()
+{
/* Setup */
- p = pinctrl_get(&device);
+ p = devm_pinctrl_get(&device);
if (IS_ERR(p))
...
@@ -1111,7 +1118,10 @@ foo_switch()
s2 = pinctrl_lookup_state(foo->p, "pos-B");
if (IS_ERR(s2))
...
+}
+foo_switch()
+{
/* Enable on position A */
ret = pinctrl_select_state(s1);
if (ret < 0)
@@ -1125,8 +1135,6 @@ foo_switch()
...
...
-
- pinctrl_put(p);
}
The above has to be done from process context.
diff --git a/Documentation/power/power_supply_class.txt b/Documentation/power/power_supply_class.txt
index 9f16c5178b66..211831d4095f 100644
--- a/Documentation/power/power_supply_class.txt
+++ b/Documentation/power/power_supply_class.txt
@@ -84,6 +84,8 @@ are already charged or discharging, 'n/a' can be displayed (or
HEALTH - represents health of the battery, values corresponds to
POWER_SUPPLY_HEALTH_*, defined in battery.h.
+VOLTAGE_OCV - open circuit voltage of the battery.
+
VOLTAGE_MAX_DESIGN, VOLTAGE_MIN_DESIGN - design values for maximal and
minimal power supply voltages. Maximal/minimal means values of voltages
when battery considered "full"/"empty" at normal conditions. Yes, there is
diff --git a/Documentation/power/suspend-and-cpuhotplug.txt b/Documentation/power/suspend-and-cpuhotplug.txt
index f28f9a6f0347..e13dafc8e8f1 100644
--- a/Documentation/power/suspend-and-cpuhotplug.txt
+++ b/Documentation/power/suspend-and-cpuhotplug.txt
@@ -29,7 +29,7 @@ More details follow:
Write 'mem' to
/sys/power/state
- syfs file
+ sysfs file
|
v
Acquire pm_mutex lock
diff --git a/Documentation/thermal/cpu-cooling-api.txt b/Documentation/thermal/cpu-cooling-api.txt
new file mode 100644
index 000000000000..fca24c931ec8
--- /dev/null
+++ b/Documentation/thermal/cpu-cooling-api.txt
@@ -0,0 +1,32 @@
+CPU cooling APIs How To
+===================================
+
+Written by Amit Daniel Kachhap <amit.kachhap@linaro.org>
+
+Updated: 12 May 2012
+
+Copyright (c) 2012 Samsung Electronics Co., Ltd(http://www.samsung.com)
+
+0. Introduction
+
+The generic cpu cooling (freq clipping) layer provides registration/unregistration
+APIs to the caller. The binding of the cooling devices to the trip point is left
+to the user. The registration APIs return the cooling device pointer.
+
+1. cpu cooling APIs
+
+1.1 cpufreq registration/unregistration APIs
+1.1.1 struct thermal_cooling_device *cpufreq_cooling_register(
+ struct cpumask *clip_cpus)
+
+ This interface function registers the cpufreq cooling device with the name
+ "thermal-cpufreq-%x". This api can support multiple instances of cpufreq
+ cooling devices.
+
+ clip_cpus: cpumask of cpus where the frequency constraints will happen.
+
+1.1.2 void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
+
+ This interface function unregisters the "thermal-cpufreq-%x" cooling device.
+
+ cdev: Cooling device pointer which has to be unregistered.
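+
+A minimal usage sketch (assumptions: the init path runs after cpufreq is up,
+and the declarations live in a cpu_cooling header as on most platforms):
+
+	#include <linux/cpumask.h>
+	#include <linux/cpu_cooling.h>
+	#include <linux/err.h>
+
+	static struct thermal_cooling_device *cpu_cdev;
+	static struct cpumask clip_cpus;
+
+	static int example_cpu_cooling_init(void)
+	{
+		/* constrain the frequency of all possible CPUs */
+		cpumask_copy(&clip_cpus, cpu_possible_mask);
+
+		cpu_cdev = cpufreq_cooling_register(&clip_cpus);
+		if (IS_ERR(cpu_cdev))
+			return PTR_ERR(cpu_cdev);
+		return 0;
+	}
+
+	static void example_cpu_cooling_exit(void)
+	{
+		cpufreq_cooling_unregister(cpu_cdev);
+	}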
diff --git a/Documentation/thermal/sysfs-api.txt b/Documentation/thermal/sysfs-api.txt
index 1733ab947a95..88c02334e356 100644
--- a/Documentation/thermal/sysfs-api.txt
+++ b/Documentation/thermal/sysfs-api.txt
@@ -32,7 +32,8 @@ temperature) and throttle appropriate devices.
1.1 thermal zone device interface
1.1.1 struct thermal_zone_device *thermal_zone_device_register(char *name,
- int trips, void *devdata, struct thermal_zone_device_ops *ops)
+ int trips, int mask, void *devdata,
+ struct thermal_zone_device_ops *ops)
This interface function adds a new thermal zone device (sensor) to
/sys/class/thermal folder as thermal_zone[0-*]. It tries to bind all the
@@ -40,16 +41,17 @@ temperature) and throttle appropriate devices.
name: the thermal zone name.
trips: the total number of trip points this thermal zone supports.
+ mask: Bit string: If 'n'th bit is set, then trip point 'n' is writeable.
devdata: device private data
ops: thermal zone device call-backs.
.bind: bind the thermal zone device with a thermal cooling device.
.unbind: unbind the thermal zone device with a thermal cooling device.
.get_temp: get the current temperature of the thermal zone.
- .get_mode: get the current mode (user/kernel) of the thermal zone.
- - "kernel" means thermal management is done in kernel.
- - "user" will prevent kernel thermal driver actions upon trip points
+ .get_mode: get the current mode (enabled/disabled) of the thermal zone.
+ - "enabled" means the kernel thermal management is enabled.
+ - "disabled" will prevent kernel thermal driver action upon trip points
so that user applications can take charge of thermal management.
- .set_mode: set the mode (user/kernel) of the thermal zone.
+ .set_mode: set the mode (enabled/disabled) of the thermal zone.
.get_trip_type: get the type of certain trip point.
.get_trip_temp: get the temperature above which the certain trip point
will be fired.
@@ -82,7 +84,8 @@ temperature) and throttle appropriate devices.
1.3 interface for binding a thermal zone device with a thermal cooling device
1.3.1 int thermal_zone_bind_cooling_device(struct thermal_zone_device *tz,
- int trip, struct thermal_cooling_device *cdev);
+ int trip, struct thermal_cooling_device *cdev,
+ unsigned long upper, unsigned long lower);
This interface function bind a thermal cooling device to the certain trip
point of a thermal zone device.
@@ -91,6 +94,12 @@ temperature) and throttle appropriate devices.
cdev: thermal cooling device
trip: indicates which trip point the cooling devices is associated with
in this thermal zone.
+ upper: the maximum cooling state for this trip point.
+ THERMAL_NO_LIMIT means no upper limit,
+ and the cooling device can be in max_state.
+ lower: the minimum cooling state that can be used for this trip point.
+ THERMAL_NO_LIMIT means no lower limit,
+ and the cooling device can be in cooling state 0.
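+
+ A hedged sketch of a .bind callback using this interface (the callback
+ signature and the trip number are assumptions made for illustration):
+
+	static int example_bind(struct thermal_zone_device *tz,
+				struct thermal_cooling_device *cdev)
+	{
+		/* tie trip point 0 to cdev without constraining its state range */
+		return thermal_zone_bind_cooling_device(tz, 0, cdev,
+							THERMAL_NO_LIMIT,
+							THERMAL_NO_LIMIT);
+	}
+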
1.3.2 int thermal_zone_unbind_cooling_device(struct thermal_zone_device *tz,
int trip, struct thermal_cooling_device *cdev);
@@ -103,6 +112,29 @@ temperature) and throttle appropriate devices.
trip: indicates which trip point the cooling devices is associated with
in this thermal zone.
+1.4 Thermal Zone Parameters
+1.4.1 struct thermal_bind_params
+ This structure defines the following parameters that are used to bind
+ a zone with a cooling device for a particular trip point.
+ .cdev: The cooling device pointer
+ .weight: The 'influence' of a particular cooling device on this zone.
+ This is on a percentage scale. The sum of all these weights
+ (for a particular zone) cannot exceed 100.
+ .trip_mask: This is a bit mask that gives the binding relation between
+ this thermal zone and cdev, for a particular trip point.
+ If the nth bit is set, then the cdev and thermal zone are bound
+ for trip point n.
+ .match: This callback returns success (0) if the 'tz and cdev' need to
+ be bound, as per platform data.
+1.4.2 struct thermal_zone_params
+ This structure defines the platform level parameters for a thermal zone.
+ This data, for each thermal zone should come from the platform layer.
+ This is an optional feature where some platforms can choose not to
+ provide this data.
+ .governor_name: Name of the thermal governor used for this zone
+ .num_tbps: Number of thermal_bind_params entries for this zone
+ .tbp: thermal_bind_params entries
+
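+A hedged platform-data sketch (the cooling device pointer and the match
+callback are placeholders to be filled in by platform code):
+
+	static struct thermal_bind_params example_tbp[] = {
+		{
+			.cdev		= NULL,		/* set once the cooling device exists */
+			.weight		= 100,		/* sole cooling device for this zone */
+			.trip_mask	= 0x01,		/* bound to trip point 0 only */
+			.match		= NULL,		/* or a platform-specific callback */
+		},
+	};
+
+	static struct thermal_zone_params example_tzp = {
+		.governor_name	= "step_wise",
+		.num_tbps	= ARRAY_SIZE(example_tbp),
+		.tbp		= example_tbp,
+	};
+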
2. sysfs attributes structure
RO read only value
@@ -117,8 +149,10 @@ Thermal zone device sys I/F, created once it's registered:
|---type: Type of the thermal zone
|---temp: Current temperature
|---mode: Working mode of the thermal zone
+ |---policy: Thermal governor used for this zone
|---trip_point_[0-*]_temp: Trip point temperature
|---trip_point_[0-*]_type: Trip point type
+ |---trip_point_[0-*]_hyst: Hysteresis value for this trip point
Thermal cooling device sys I/F, created once it's registered:
/sys/class/thermal/cooling_device[0-*]:
@@ -167,16 +201,20 @@ temp
RO, Required
mode
- One of the predefined values in [kernel, user].
+ One of the predefined values in [enabled, disabled].
This file gives information about the algorithm that is currently
managing the thermal zone. It can be either default kernel based
algorithm or user space application.
- kernel = Thermal management in kernel thermal zone driver.
- user = Preventing kernel thermal zone driver actions upon
- trip points so that user application can take full
- charge of the thermal management.
+ enabled = enable Kernel Thermal management.
+ disabled = Preventing kernel thermal zone driver actions upon
+ trip points so that user application can take full
+ charge of the thermal management.
RW, Optional
+policy
+ One of the various thermal governors used for a particular zone.
+ RW, Required
+
trip_point_[0-*]_temp
The temperature above which trip point will be fired.
Unit: millidegree Celsius
@@ -188,6 +226,11 @@ trip_point_[0-*]_type
thermal zone.
RO, Optional
+trip_point_[0-*]_hyst
+ The hysteresis value for a trip point, represented as an integer
+ Unit: Celsius
+ RW, Optional
+
cdev[0-*]
Sysfs link to the thermal cooling device node where the sys I/F
for cooling device throttling control represents.
@@ -248,7 +291,8 @@ method, the sys I/F structure will be built like this:
|thermal_zone1:
|---type: acpitz
|---temp: 37000
- |---mode: kernel
+ |---mode: enabled
+ |---policy: step_wise
|---trip_point_0_temp: 100000
|---trip_point_0_type: critical
|---trip_point_1_temp: 80000
@@ -290,3 +334,38 @@ to a thermal_zone_device when it registers itself with the framework. The
event will be one of:{THERMAL_AUX0, THERMAL_AUX1, THERMAL_CRITICAL,
THERMAL_DEV_FAULT}. Notification can be sent when the current temperature
crosses any of the configured thresholds.
+
+5. Export Symbol APIs:
+
+5.1: get_tz_trend:
+This function returns the trend of a thermal zone, i.e. the rate of change
+of temperature of the thermal zone. Ideally, the thermal sensor drivers
+are supposed to implement the callback. If they don't, the thermal
+framework calculates the trend by comparing the previous and the current
+temperature values.
+
+5.2:get_thermal_instance:
+This function returns the thermal_instance corresponding to a given
+{thermal_zone, cooling_device, trip_point} combination. Returns NULL
+if such an instance does not exist.
+
+5.3:notify_thermal_framework:
+This function handles the trip events from sensor drivers. It starts
+throttling the cooling devices according to the policy configured.
+For CRITICAL and HOT trip points, this notifies the respective drivers,
+and does actual throttling for other trip points, i.e. ACTIVE and PASSIVE.
+The throttling policy is based on the configured platform data; if no
+platform data is provided, this uses the step_wise throttling policy.
+
+5.4:thermal_cdev_update:
+This function serves as an arbitrator to set the state of a cooling
+device. It sets the cooling device to the deepest cooling state if
+possible.
+
+5.5: thermal_register_governor:
+This function lets the various thermal governors register themselves
+with the thermal framework. At run time, depending on a zone's platform
+data, a particular governor is used for throttling.
+
+5.6: thermal_unregister_governor:
+This function unregisters a governor from the thermal framework.
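+
+As an illustration, registering a governor might look like the sketch
+below. This is not taken from an in-tree governor; the structure fields
+and function signatures are assumptions for the kernel version this
+document describes and should be checked against include/linux/thermal.h.
+
+	#include <linux/module.h>
+	#include <linux/thermal.h>
+
+	/* Sketch only: a do-nothing governor. */
+	static int example_throttle(struct thermal_zone_device *tz, int trip)
+	{
+		/*
+		 * A real governor would query get_tz_trend(tz, trip), look
+		 * up the instances for this trip with get_thermal_instance(),
+		 * pick a target cooling state for each, and then call
+		 * thermal_cdev_update() on every cooling device it touched.
+		 */
+		return 0;
+	}
+
+	static struct thermal_governor example_governor = {
+		.name		= "example",
+		.throttle	= example_throttle,
+	};
+
+	static int __init example_governor_init(void)
+	{
+		return thermal_register_governor(&example_governor);
+	}
+	module_init(example_governor_init);
+
+	static void __exit example_governor_exit(void)
+	{
+		thermal_unregister_governor(&example_governor);
+	}
+	module_exit(example_governor_exit);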
diff --git a/Documentation/trace/tracedump.txt b/Documentation/trace/tracedump.txt
new file mode 100644
index 000000000000..cba0decc3fc3
--- /dev/null
+++ b/Documentation/trace/tracedump.txt
@@ -0,0 +1,58 @@
+ Tracedump
+
+ Documentation written by Alon Farchy
+
+1. Overview
+============
+
+The tracedump module provides additional mechanisms to retrieve tracing data.
+It can be used to retrieve traces, in either binary or plain-text format,
+after a kernel panic or while the system is running. The dumped data is compressed
+with zlib to conserve space.
+
+2. Configuration Options
+========================
+
+CONFIG_TRACEDUMP - enable the tracedump module.
+CONFIG_TRACEDUMP_PANIC - dump to the console on kernel panic.
+CONFIG_TRACEDUMP_PROCFS - add file /proc/tracedump for userspace access.
+
+3. Module Parameters
+====================
+
+format_ascii
+
+ If 1, data will be dumped in human-readable format, ordered by time.
+ If 0, data will be dumped as raw pages from the ring buffer,
+ ordered by CPU, followed by the saved cmdlines so that the
+ raw data can be decoded. Default: 0
+
+panic_size
+
+ Maximum amount of compressed data, in kilobytes, to dump during a
+ kernel panic. This only applies if format_ascii == 1. In this case,
+ tracedump will compress the data, check the size, and if it is too big
+ discard some data, compress again, and so on until the size is below
+ panic_size. Default: 512KB
+
+compress_level
+
+ Determines the compression level that zlib will use. Available levels
+ are 0-9, with 0 as no compression and 9 as maximum compression.
+ Default: 9.
+
+4. Usage
+========
+
+If configured with CONFIG_TRACEDUMP_PROCFS, the tracing data can be pulled
+by reading from /proc/tracedump. For example:
+
+ # cat /proc/tracedump > my_tracedump
+
+Tracedump will surround the dump with a magic word (TRACEDUMP). Between the
+magic words is the compressed data, which can be decompressed with a standard
+zlib implementation. After decompression, if format_ascii == 1, then the
+output should be readable.
+
+If format_ascii == 0, the output is in binary form, with each CPU's data
+delimited by CPU_END. After the last CPU come the saved cmdlines, delimited
+by |.
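+
+For reference, the sketch below shows one way to strip the magic words
+and inflate the payload from userspace. It is illustrative only: the
+exact framing around the magic words and whether the payload is a
+zlib-wrapped or raw deflate stream should be checked against the
+tracedump sources (for a raw stream, use inflateInit2(&zs, -MAX_WBITS)
+instead of inflateInit()).
+
+	/* build: cc -o tdinflate tdinflate.c -lz */
+	#include <fcntl.h>
+	#include <stdio.h>
+	#include <sys/mman.h>
+	#include <sys/stat.h>
+	#include <zlib.h>
+
+	int main(int argc, char **argv)
+	{
+		static const char magic[] = "TRACEDUMP";
+		size_t mlen = sizeof(magic) - 1;
+		unsigned char out[64 * 1024], *buf;
+		z_stream zs = { 0 };
+		struct stat st;
+		int fd, ret;
+
+		if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0 ||
+		    fstat(fd, &st) || (size_t)st.st_size < 2 * mlen)
+			return 1;
+		buf = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
+		if (buf == MAP_FAILED || inflateInit(&zs) != Z_OK)
+			return 1;
+
+		/* The compressed payload sits between the two markers. */
+		zs.next_in = buf + mlen;
+		zs.avail_in = st.st_size - 2 * mlen;
+		do {
+			zs.next_out = out;
+			zs.avail_out = sizeof(out);
+			ret = inflate(&zs, Z_NO_FLUSH);
+			fwrite(out, 1, sizeof(out) - zs.avail_out, stdout);
+		} while (ret == Z_OK);
+		inflateEnd(&zs);
+		return ret == Z_STREAM_END ? 0 : 1;
+	}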
diff --git a/Documentation/trace/tracelevel.txt b/Documentation/trace/tracelevel.txt
new file mode 100644
index 000000000000..b282dd2b329b
--- /dev/null
+++ b/Documentation/trace/tracelevel.txt
@@ -0,0 +1,42 @@
+ Tracelevel
+
+ Documentation by Alon Farchy
+
+1. Overview
+===========
+
+Tracelevel allows subsystem authors to add trace priorities to
+their tracing events. High priority traces will be enabled
+automatically at boot time.
+
+This module is configured with CONFIG_TRACELEVEL.
+
+2. Usage
+=========
+
+To give an event a priority, use the function tracelevel_register
+at any time.
+
+ tracelevel_register(my_event, level);
+
+my_event corresponds directly to the event name as defined in the
+event header file. Available levels are:
+
+ TRACELEVEL_ERR 3
+ TRACELEVEL_WARN 2
+ TRACELEVEL_INFO 1
+ TRACELEVEL_DEBUG 0
+
+Any event registered at boot time as TRACELEVEL_ERR will be enabled
+by default. The header also exposes the function tracelevel_set_level
+to change the trace level at runtime. Any trace event registered with the
+specified level or higher will be enabled with this call.
+
+A userspace handle to tracelevel_set_level is available via the module
+parameter 'level'. For example,
+
+ echo 1 > /sys/module/tracelevel/parameters/level
+
+is logically equivalent to:
+
+ tracelevel_set_level(TRACELEVEL_INFO);
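+
+Putting the two calls together, a driver might do something like the
+sketch below (the event name my_driver_event is hypothetical and error
+handling is omitted; check the tracelevel header for the exact return
+types):
+
+	/* In the driver's init path: mark this event high priority so
+	 * that it is enabled by default at boot. */
+	tracelevel_register(my_driver_event, TRACELEVEL_ERR);
+
+	/* Later, e.g. from a debugging hook: lower the threshold so
+	 * that INFO-level events are traced as well. */
+	tracelevel_set_level(TRACELEVEL_INFO);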
diff --git a/Documentation/video/tegra_dc_ext.txt b/Documentation/video/tegra_dc_ext.txt
new file mode 100644
index 000000000000..6fc3394c6652
--- /dev/null
+++ b/Documentation/video/tegra_dc_ext.txt
@@ -0,0 +1,83 @@
+The Tegra display controller (dc) driver has two frontends that implement
+different interfaces:
+1. The traditional fbdev interface, implemented in drivers/video/tegra/fb.c
+2. A new interface that exposes the unique capabilities of the controller,
+ implemented in drivers/video/tegra/dc/ext
+
+The Tegra fbdev capabilities are documented in fb/tegrafb.c [TODO]. This
+document will describe the new "extended" dc interface.
+
+The extended interface is only available when its frontend has been compiled
+in, i.e., CONFIG_TEGRA_DC_EXTENSIONS=y. The dc_ext frontend can coexist with
+tegrafb, but takes precedence (more on that later).
+
+The dc_ext frontend's interface to userspace is exposed through a set of
+device nodes: one for each controller (generally /dev/tegra_dc_N), and one
+"control" node (generally /dev/tegra_dc_ctrl). Communication through these
+device nodes is done with special IOCTLs. There is also an event delivery
+mechanism; userspace can wait for and receive events with read() or poll().
+
+The tegra_dc_N interface is stateful; each fresh open() of the device node
+creates a client instance. In order to prevent multiple processes from
+"fighting" for the hardware, only one client instance is permitted to control
+certain resources at a time, on a first-come, first-served basis.
+
+Overview of tegra_dc_N IOCTLs:
+SET_NVMAP_FD: This is used to associate your nvmap client with this dc_ext
+ client instance. This is necessary so that the kernel can
+ appropriately enforce permissions on nvmap buffers.
+
+GET_WINDOW: A dc_ext client must call this on each window that it wishes to
+ control. This strictly enforces a single dc_ext client on a
+ window at a time.
+
+PUT_WINDOW: A dc_ext client may call this to release a window previously
+ reserved with GET_WINDOW.
+
+FLIP: This ioctl is used to actually display an nvmap surface using one or
+ more window. Each time a dc_ext client performs a FLIP, the request is
+ more windows. Each time a dc_ext client performs a FLIP, the request is
+ return immediately). Various parameters are available in the
+ tegra_dc_ext_flip structure.
+ A dc_ext client may only use this on windows that it has previously
+ reserved with a successful GET_WINDOW call.
+
+GET_CURSOR: This is analogous to GET_WINDOW, but for the hardware cursor
+ instead of a window.
+
+PUT_CURSOR: This is analogous to PUT_WINDOW, but for the hardware cursor
+ instead of a window.
+
+SET_CURSOR_IMAGE: This is used to change the hardware cursor image. May only
+ be used by a client who has successfully performed a
+ GET_CURSOR call.
+
+SET_CURSOR: This is used to actually place the hardware cursor on the screen.
+ May only be used by a client who has successfully performed a
+ GET_CURSOR call.
+
+SET_CSC: This may be used to set a color space conversion matrix on a window.
+ A dc_ext client may only use this on windows that it has previously
+ reserved with a successful GET_WINDOW call.
+
+GET_STATUS: This is used to retrieve general status about the dc.
+
+GET_VBLANK_SYNCPT: This is used to retrieve the auto-incrementing vblank
+ syncpoint for the head associated with this dc.
+
+
+Overview of tegra_dc_ctrl IOCTLs:
+GET_NUM_OUTPUTS: This returns the number of available output devices on the
+ system, which may exceed the number of display controllers.
+
+GET_OUTPUT_PROPERTIES: This returns data about the given output, such as what
+ kind of output it is, whether it's currently associated
+ with a head, etc.
+
+GET_OUTPUT_EDID: This returns the binary EDID read from the device connected
+ to the given output, if any.
+
+SET_EVENT_MASK: A dc_ext client may call this ioctl with a bitmask of events
+ that it wishes to receive. These events will then be
+ available to that client on a subsequent read() on the same
+ file descriptor.
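+
+To illustrate the flow, a minimal client might look like the sketch
+below. The ioctl request names and the flip structure are illustrative
+placeholders (marked "assumed"); the authoritative definitions live in
+the tegra_dc_ext UAPI header shipped with the driver.
+
+	#include <fcntl.h>
+	#include <sys/ioctl.h>
+	#include <unistd.h>
+	/* plus the tegra_dc_ext header for the ioctls and structures */
+
+	static int flip_once(int nvmap_fd, struct tegra_dc_ext_flip *flip)
+	{
+		int dc = open("/dev/tegra_dc_0", O_RDWR);
+
+		if (dc < 0)
+			return -1;
+		/* Associate our nvmap client so buffer permissions can be
+		 * enforced on the surfaces we flip. */
+		ioctl(dc, TEGRA_DC_EXT_SET_NVMAP_FD, nvmap_fd);	/* assumed */
+		/* Claim window 0 for this client instance. */
+		ioctl(dc, TEGRA_DC_EXT_GET_WINDOW, 0);		/* assumed */
+		/* Queue an asynchronous flip; the ioctl returns as soon as
+		 * the request is placed on the flip queue. */
+		ioctl(dc, TEGRA_DC_EXT_FLIP, flip);		/* assumed */
+		/* Release the window and the device. */
+		ioctl(dc, TEGRA_DC_EXT_PUT_WINDOW, 0);		/* assumed */
+		close(dc);
+		return 0;
+	}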
diff --git a/Documentation/video4linux/README.tegra b/Documentation/video4linux/README.tegra
new file mode 100644
index 000000000000..610eeded2ae9
--- /dev/null
+++ b/Documentation/video4linux/README.tegra
@@ -0,0 +1,180 @@
+Theory of Operations
+====================
+
+There are three separate drivers within the V4L2 framework that are relevant
+to Tegra-based platforms. They are as follows:
+
+Image Sensor driver
+===================
+This driver communicates only with the image sensor hardware (typically via
+I2C transactions), and is intentionally PLATFORM-AGNOSTIC. Existing image
+sensor drivers can be found in drivers/media/video. For example, the ov9740
+driver communicates with the Omnivision OV9740 image sensor with built-in ISP.
+
+Some of the things that this driver is responsible for are:
+
+Setting up the proper output format of the image sensor,
+
+Setting up image output extents,
+
+Setting up capture and crop regions.
+
+Camera Host driver
+==================
+This driver communicates only with the camera controller on a given platform,
+and is intentionally IMAGE-SENSOR-AGNOSTIC. Existing camera host drivers
+can be found in drivers/media/video; tegra_v4l2_camera.c is the one of
+interest here. This camera host driver knows how to
+program the CSI/VI block on Tegra2 and Tegra3 platforms.
+
+Some of the things that this driver is responsible for are:
+
+Setting up the proper input format (image frame data flowing from the image
+sensor to the camera host),
+
+Setting up the proper output format (image frame data flowing from the
+camera host to system memory),
+
+Programming the DMA destination to receive the image frame data,
+
+Starting and stopping the reception of image frame data.
+
+Videobuf driver
+===============
+This driver is responsible for the allocation and deallocation of buffers that
+are used to hold image frame data. Different camera hosts have different
+DMA requirements, which makes it necessary to allow for different methods of
+buffer allocation. For example, the Tegra2 and Tegra3 camera hosts cannot
+DMA via a scatter-gather list, so the image frame buffers must be physically
+contiguous. The videobuf-dma-contig.c videobuf driver can be found in
+drivers/media/video, and contains a videobuf implementation that allocates
+physically contiguous regions. One can also have a videobuf driver that
+uses a different allocator like nvmap.
+
+The nvhost driver and Syncpts
+=============================
+
+The camera host driver (tegra_v4l2_camera) has a dependency on the nvhost
+driver/subsystem in order to make use of syncpts. In other words, the camera
+host driver is a client of nvhost.
+
+A syncpt is essentially an incrementing hardware counter that triggers an
+interrupt when a certain number (or threshold) is reached. The interrupt,
+however, is hidden from clients of nvhost. Instead, asynchronous completion
+notification is done by calling an nvhost routine that goes to sleep and
+wakes up upon completion.
+
+Tegra has a number of syncpts that serve various purposes. The two syncpts
+that are used by the camera host driver are the VI and CSI syncpts. Other
+syncpts are used in display, etc.
+
+A syncpt increments when a certain hardware condition is met.
+
+The public operations available for a syncpt are:
+
+nvhost_syncpt_read_ext(syncpt_id) - Read the current syncpt counter value.
+nvhost_syncpt_wait_timeout_ext(syncpt_id, threshold, timeout) - Go to sleep
+ until the syncpt value reaches the threshold, or until the timeout
+ expires.
+nvhost_syncpt_cpu_incr_ext(syncpt_id) - Manually increment a syncpt.
+
+Syncpts are used in the camera host driver in order to signify the completion
+of an operation. The typical use case can be illustrated by summarizing
+the steps that the camera host driver takes in capturing a single frame
+(this is called one-shot mode, where we program each frame transfer
+separately):
+
+0) At the very start, read the current syncpt values and remember them. See
+ tegra_camera_activate() -> tegra_camera_save_syncpts(), where we read
+ the current values and store them in pcdev->syncpt_csi and pcdev->syncpt_vi.
+
+1) Program the camera host registers to prepare to receive frames from the
+ image sensor using the proper input format. Note that we are at this
+ point NOT telling the camera host to DMA a frame. That comes later. See
+ tegra_camera_capture_setup(), where we perform a series of register
+ writes that depend on our input format, output format, image extents,
+ etc.
+
+2) Increment our remembered copies of the current syncpt values according to
+ how many syncpt increments we are expecting for the given operation we
+ want to perform. For capturing a single frame, we are expecting a single
+ increment on the CSI syncpt when the reception of the frame is complete, and
+ a single increment on the VI syncpt when the DMA of the frame is complete.
+ See tegra_camera_capture_start(), where we increment pcdev->syncpt_csi
+ and pcdev->syncpt_vi.
+
+3) Program the DMA destination registers, and toggle the bit in
+ TEGRA_CSI_PIXEL_STREAM_PPA_COMMAND to do the DMA on the next available
+ frame. See tegra_camera_capture_start() for this.
+
+4) Call nvhost_syncpt_wait_timeout_ext() to wait on the CSI syncpt threshold.
+ Remember that we incremented our local syncpt values in step 2. Those
+ new values become the threshold to wait for. See
+ tegra_camera_capture_start().
+
+5) When the frame finishes its transfer from the image sensor to the camera
+ host, the CSI syncpt hardware counter will be incremented by hardware.
+ Since the hardware syncpt value will now match the threshold, our call to
+ nvhost_syncpt_wait_timeout_ext() in step 4 wakes up.
+
+6) We now tell the camera host to get ready for the DMA to complete. We do
+ this by writing again to TEGRA_CSI_PIXEL_STREAM_PPA_COMMAND. See
+ tegra_camera_capture_stop().
+
+7) When the camera host finishes its DMA, we expect the hardware to increment
+ the VI syncpt. Therefore, we call nvhost_syncpt_wait_timeout_ext() on
+ the VI syncpt with our new threshold that we got by the incrementing in
+ step 2. See tegra_camera_capture_stop().
+
+8) When the camera host finally finishes its DMA, the VI syncpt hardware
+ counter increments. Since our VI syncpt threshold is met, the call to
+ nvhost_syncpt_wait_timeout_ext() wakes up, and we are done. See
+ tegra_camera_capture_stop().
+
+9) To capture the next frame, go back to step 2. The tegra_v4l2_camera driver
+ calls tegra_camera_capture_setup at the beginning, and then a worker thread
+ repeatedly calls tegra_camera_capture_start() and
+ tegra_camera_capture_stop(). See tegra_camera_work() ->
+ tegra_camera_capture_frame().
+
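+The CSI one-shot flow above can be condensed into the following sketch.
+It uses the simplified syncpt signatures listed earlier (the in-tree
+functions take additional arguments, such as the nvhost device), and
+csi_program_capture()/csi_trigger()/csi_stop() as well as csi_id, vi_id,
+buf and TIMEOUT_MS are placeholders standing in for the real register
+writes and values used by tegra_camera_capture_setup()/start()/stop().
+
+	/* Step 0: remember the current syncpt values. */
+	pcdev->syncpt_csi = nvhost_syncpt_read_ext(csi_id);
+	pcdev->syncpt_vi  = nvhost_syncpt_read_ext(vi_id);
+
+	/* Step 1: program input/output formats, extents, etc. */
+	csi_program_capture(pcdev);
+
+	for (;;) {
+		/* Step 2: bump the expected thresholds for this frame. */
+		pcdev->syncpt_csi++;
+		pcdev->syncpt_vi++;
+
+		/* Step 3: set the DMA destination and trigger the frame. */
+		csi_trigger(pcdev, buf);
+
+		/* Steps 4-5: sleep until the frame has arrived from the
+		 * sensor (CSI syncpt reaches the new threshold). */
+		nvhost_syncpt_wait_timeout_ext(csi_id, pcdev->syncpt_csi,
+					       TIMEOUT_MS);
+
+		/* Steps 6-8: arm the DMA-complete path and sleep until the
+		 * frame has been written to memory (VI syncpt). */
+		csi_stop(pcdev);
+		nvhost_syncpt_wait_timeout_ext(vi_id, pcdev->syncpt_vi,
+					       TIMEOUT_MS);
+
+		/* Step 9: hand the buffer to videobuf and loop for the
+		 * next frame. */
+	}
+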
+Note for VIP: Only a single syncpt is used for the VIP path. We use the
+continuous VIP VSYNC syncpt to determine the completion of a frame transfer.
+In addition, to start and finish the capture of a frame, the
+VI_CAMERA_CONTROL register is used. See tegra_camera_capture_start() and
+tegra_camera_capture_stop() to see how that register is used for the VIP path.
+Essentially, steps 4, 5, and 6 are eliminated, and instead of writing to
+TEGRA_CSI_PIXEL_STREAM_PPA_COMMAND or TEGRA_CSI_PIXEL_STREAM_PPB_COMMAND,
+we write to VI_CAMERA_CONTROL to achieve the same purpose for VIP.
+
+VIP versus CSI
+==============
+VI_VI_CORE_CONTROL bits 26:24 (INPUT_TO_CORE_EXT) should be set to 0
+(use INPUT_TO_CORE).
+
+VI_VI_INPUT_CONTROL bit 1 (VIP_INPUT_ENABLE) should be set to 1 (ENABLED),
+bit 26:25 (SYNC_FORMAT) should be set to 1 (ITU656), and bit 27 (FIELD_DETECT)
+should be set to 1 (ENABLED).
+
+VI_H_DOWNSCALE_CONTROL bit 0 (INPUT_H_SIZE_SEL) should be set to 0 (VIP),
+and bits 3:2 (INPUT_H_SIZE_SEL_EXT) should be set to 0 (USE INPUT_H_SIZE_SEL).
+
+Rather than placing the image width and height into VI_CSI_PPA_H_ACTIVE and
+VI_CSI_PPA_V_ACTIVE, respectively (or the CSI B counterparts), use
+VI_VIP_H_ACTIVE and VI_VIP_V_ACTIVE bits 31:16. Bits 15:0 of VI_VIP_H_ACTIVE
+and VI_VIP_V_ACTIVE are the number of clock cycles to wait after receiving
+HSYNC or VSYNC before starting. This can be used to adjust the vertical and
+horizontal back porches.
+
+VI_PIN_INPUT_ENABLE should be set to 0x00006fff, which enables input pins
+VHS, VVS, and VD11..VD0.
+
+VI_PIN_INVERSION bits 1 and 2 can be used to invert input pins VHS and VVS,
+respectively.
+
+VI_CONT_SYNCPT_VIP_VSYNC bit 8 (enable VIP_VSYNC) should be set to 1, and
+bits 7:0 should hold the index of the syncpt to be used. When this syncpt
+is enabled, the syncpt specified by the index will increment by 1 every
+time a VSYNC occurs. We use this syncpt to signal frame completion.
+
+VI_CAMERA_CONTROL bit 0 should be set to 1 to start capturing. Writing a 0
+to this bit is ignored, so to stop capturing, write 1 to bit 2.
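+
+As a rough illustration, the VIP input setup described above maps to
+register writes of the following shape. The vi_write() helper and the
+width/height variables are placeholders; the real driver uses its own
+register accessors and definitions.
+
+	/* VI_VI_INPUT_CONTROL: enable VIP input, ITU656 sync, field detect. */
+	vi_write(pcdev, VI_VI_INPUT_CONTROL,
+		 (1 << 1) |	/* VIP_INPUT_ENABLE = ENABLED */
+		 (1 << 25) |	/* SYNC_FORMAT (bits 26:25) = 1 (ITU656) */
+		 (1 << 27));	/* FIELD_DETECT = ENABLED */
+
+	/* Image extents go in bits 31:16; bits 15:0 delay the start after
+	 * HSYNC/VSYNC and can be used to tune the back porches. */
+	vi_write(pcdev, VI_VIP_H_ACTIVE, width << 16);
+	vi_write(pcdev, VI_VIP_V_ACTIVE, height << 16);
+
+	/* Enable the VHS, VVS and VD11..VD0 input pins. */
+	vi_write(pcdev, VI_PIN_INPUT_ENABLE, 0x00006fff);
+
+	/* Start capturing; write 1 to bit 2 (not 0 to bit 0) to stop. */
+	vi_write(pcdev, VI_CAMERA_CONTROL, 1 << 0);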
diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577de918f..a6ab4b62d926 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.
The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.
The backend is called gcwq. There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues. Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.
Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more. To get a detailed overview refer to the API description of
alloc_workqueue() below.
-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq. For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool. For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.
For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler. The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers. Generally, work items are not expected to
-hog a CPU and consume many cycles. That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal. As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items. This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler. The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers. Generally, work items are
+not expected to hog a CPU and consume many cycles. That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal. As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items. This allows using a minimal number of workers
+without losing execution bandwidth.
Keeping idle workers around doesn't cost other than the memory space
for kthreads, so cmwq holds onto idle ones for a while before killing
them.
For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible. The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible. The responsibility of regulating
concurrency level is on the users. There is also a flag to mark a
bound wq to ignore the concurrency management. Please refer to the
API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.
WQ_HIGHPRI
- Work items of a highpri wq are queued at the head of the
- worklist of the target gcwq and start execution regardless of
- the current concurrency level. In other words, highpri work
- items will always start execution as soon as execution
- resource is available.
+ Work items of a highpri wq are queued to the highpri
+ thread-pool of the target gcwq. Highpri thread-pools are
+ served by worker threads with elevated nice level.
- Ordering among highpri work items is preserved - a highpri
- work item queued after another highpri work item will start
- execution after the earlier highpri work item starts.
-
- Although highpri work items are not held back by other
- runnable work items, they still contribute to the concurrency
- level. Highpri work items in runnable state will prevent
- non-highpri work items from starting execution.
-
- This flag is meaningless for unbound wq.
+ Note that normal and highpri thread-pools don't interact with
+ each other. Each maintains its separate pool of workers and
+ implements concurrency management among its workers.
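+
+ For example, a wq whose work items should be served by the highpri
+ thread-pool can be created and used as below (the names hp_wq and
+ some_work are arbitrary; a max_active of 0 selects the default limit):
+
+	hp_wq = alloc_workqueue("example_hp", WQ_HIGHPRI, 0);
+	queue_work(hp_wq, &some_work);
+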
WQ_CPU_INTENSIVE
Work items of a CPU intensive wq do not contribute to the
concurrency level. In other words, runnable CPU intensive
- work items will not prevent other work items from starting
- execution. This is useful for bound work items which are
- expected to hog CPU cycles so that their execution is
- regulated by the system scheduler.
+ work items will not prevent other work items in the same
+ thread-pool from starting execution. This is useful for bound
+ work items which are expected to hog CPU cycles so that their
+ execution is regulated by the system scheduler.
Although CPU intensive work items don't contribute to the
concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.
This flag is meaningless for unbound wq.
- WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
- This combination makes the wq avoid interaction with
- concurrency management completely and behave as a simple
- per-CPU execution context provider. Work items queued on a
- highpri CPU-intensive wq start execution as soon as resources
- are available and don't affect execution of other work items.
-
@max_active:
@max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
35 w2 wakes up and finishes
Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS EVENT
- 0 w1 and w2 start and burn CPU
- 5 w1 sleeps
- 10 w2 sleeps
- 10 w0 starts and burns CPU
- 15 w0 sleeps
- 15 w1 wakes up and finishes
- 20 w2 wakes up and finishes
- 25 w0 wakes up and burns CPU
- 30 w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
TIME IN MSECS EVENT
0 w0 starts and burns CPU