The commit:
tegra:video:nvmap: optimize cache_maint operation.
added some dead code; remove it.
Original-Change-Id: I9193a7865f5e3126b06950efaf9b5a4b6c7fd919
Rebase-Id: R30ba7719d8aa6ad48d708714396299b154cf0131
Original-Change-Id: I2ffeaf6f8dfeb279b40ca6f69f6c9157401a746a
Rebase-Id: R5a6d087b717731c957b016f903fb82b4ea22b92d
Bug 764354
Original-Change-Id: I807433ff825bed1fe91ce0cf50a2b3691c64ef0a
Reviewed-on: http://git-master/r/12227
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Tested-by: Scott Williams <scwilliams@nvidia.com>
Original-Change-Id: I3da91a438f98f2f51618446ce024f3fefd726a19
Rebase-Id: Rb1717b1f80aaf0242f4da555ce16c06946b7d072
nvmap's debugfs output was badly formatted and very difficult
to read. This commit fixes the formatting and also reports the
total allocation size.
Bug 813891
Original-Change-Id: I6e3165b3ff917d9510d39f1e35b8e6b59c086592
Reviewed-on: http://git-master/r/27349
Reviewed-by: Donghan Ryu <dryu@nvidia.com>
Tested-by: Donghan Ryu <dryu@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Rebase-Id: R0959d3648c3fe8e1d0a4bbbca5e79f0b5a744c6f
video:tegra:nvmap: Clean whole L1 instead of cleaning by MVA
For large allocations, cleaning each page of the allocation can
take a significant amount of time. If an allocation that nvmap needs
to clean or invalidate out of the cache is significantly larger than
the cache, just flush the entire cache by set/ways.
Bug 788967
Reviewed-on: http://git-master/r/19354
(cherry picked from commit c01c12e63b1476501204152356867aeb5091fb80)
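The threshold decision described above can be sketched as follows; the constant name, the multiplier, and both helper names are hypothetical, not actual nvmap symbols:

```c
#include <stddef.h>
#include <stdbool.h>

#define L1_CACHE_SIZE (32 * 1024)                    /* assumed per-core L1 size */
#define CACHE_CLEAN_THRESHOLD (8 * L1_CACHE_SIZE)    /* assumed "significantly larger" cutoff */

/* Returns true when flushing the whole cache by set/ways is cheaper
 * than cleaning the allocation line-by-line by MVA. */
static bool should_flush_whole_cache(size_t alloc_size)
{
    return alloc_size >= CACHE_CLEAN_THRESHOLD;
}
```

The trade-off is that a set/way flush evicts everyone's cache lines, so it only pays off when walking the allocation page by page would cost more than refilling the whole cache.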
tegra:video:nvmap: optimize cache_maint operation.
Optimize the cache_maint operation for carveout and heap memories.
Flush carveout memory allocations when the memory is freed.
Bug 761637
Reviewed-on: http://git-master/r/21205
Conflicts:
drivers/video/tegra/nvmap/nvmap_dev.c
drivers/video/tegra/nvmap/nvmap_heap.c
drivers/video/tegra/nvmap/nvmap_ioctl.c
(cherry picked from commit 731df4df5e895e1d4999359d6d5939fc2095f883)
tegra:video:nvmap: optimize cache flush for system heap pages.
Optimize the cache flush for pages allocated from the system heap.
Bug 788187
Reviewed-on: http://git-master/r/21687
(cherry picked from commit 3f318911ad91410aed53c90494210e2b8f74308b)
Original-Change-Id: Ia7b90ba0b50acfef1b88dd8095219c51733e027f
Reviewed-on: http://git-master/r/23465
Reviewed-by: Kirill Artamonov <kartamonov@nvidia.com>
Tested-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Rebase-Id: R04f618f88ed1d2c7a680d51a8c5113f42de3f667
Bug 786016
Original-Change-Id: Ic72c57b710a305851dfea3dda3eb217156683b39
Reviewed-on: http://git-master/r/17795
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: R9839c206d5606463e124c59f733282561ff8a48d
Enabling mutex debugging revealed potential deadlocks
introduced with compaction.
The handle spinlock is replaced with a mutex: heap functions
cannot be protected by a spinlock because they call kernel slab
allocation functions, which must not be called from atomic context.
The nvmap_client ref_lock is also replaced with a mutex; otherwise
we cannot access heap parameters protected by the nvmap_handle
mutex. The extra locking around handle->owner is removed.
Bug 793364
Original-Change-Id: I635ce9ebf259dd7bf8802457567f93b7be5795ea
Reviewed-on: http://git-master/r/19850
Reviewed-by: Kirill Artamonov <kartamonov@nvidia.com>
Tested-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-by: Daniel Willemsen <dwillemsen@nvidia.com>
Rebase-Id: Reaa132703e278d75371d5e2b25426794aa8e0e4e
There are places where nvmap_free_handle_id is called with
interrupts disabled, so a mutex cannot be used as the nvmap
handle lock.
Original-Change-Id: Icc220fe627c08f21c677d936a54f70c818dc8e8c
Reviewed-on: http://git-master/r/19489
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: Rb5a58e8226ad14340d1acae007d6b632960fae16
Bug 762482
Original-Change-Id: Ifadebc1b0c4eb0df89e179091acca0ff6e527e56
Reviewed-on: http://git-master/r/15743
Reviewed-by: Kirill Artamonov <kartamonov@nvidia.com>
Tested-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: R639e7f09f44c8919bd57a16a577b87db91160555
This reverts commit be7b9ce20d645c2c9293441830ee33a0a5fc489f.
Rebase-Id: R34033f7a7ed72aeb1e2a83ad5a09c219d3254048
Cache maintenance is needed on rw_handle to fix a display
corruption ("garbage") issue that occurs randomly.
Change-Id: I73606ae6551c0e75058e055f4a19e5f074a47004
Signed-off-by: Greg Roth <groth@nvidia.com>
Change-Id: I51fe70b92f256951e68c6bbd21e6b4d6081f4731
This reverts commit b3cc1d84d0b962fe80fc297d2e2417c3157508b6.
This reverts commit 2d49bf33f3885aab293f12d54447f66e911e3226.
The kernel now receives wait-tracking data (similar to gathers and
relocs) and compares the current syncpt value with the threshold.
If the wait has already expired, the kernel maps the buffer and
rewrites the method data to use a kernel-reserved syncpt that is
always 0, so the wait trivially pops when seen by the HW.
This patch depends on the corresponding user-space patches.
Submitted on behalf of: Chris Johnson <cjohnson@nvidia.com>
original work by: Chris Johnson <cjohnson@nvidia.com>
Change-Id: I4d4e5d3b49cab860485c4172f87247f5b4f5ea6e
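A minimal sketch of the expired-wait patching described above, assuming a hypothetical wait_cmd layout and a reserved syncpt id of 0; none of these names are the real host1x/nvhost symbols:

```c
#include <stdint.h>
#include <stdbool.h>

#define RESERVED_SYNCPT_ID 0  /* assumed kernel-reserved syncpt, value always 0 */

struct wait_cmd {
    uint32_t syncpt_id;
    uint32_t threshold;
};

/* Wrap-safe comparison: true when 'value' has already passed 'threshold'. */
static bool syncpt_expired(uint32_t value, uint32_t threshold)
{
    return (int32_t)(value - threshold) >= 0;
}

/* If the wait is already satisfied, rewrite the method data to the
 * reserved syncpt with threshold 0, so the HW pops it immediately. */
static void patch_wait(struct wait_cmd *w, uint32_t current_value)
{
    if (syncpt_expired(current_value, w->threshold)) {
        w->syncpt_id = RESERVED_SYNCPT_ID;
        w->threshold = 0;
    }
}
```

The wrap-safe signed subtraction matters because syncpt counters increment forever and eventually wrap around 2^32.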
An attempt had been made to reduce the number of pte operations
while patching relocs. The optimization was incorrectly coded
and was not providing the expected speedup.
Credit for the find goes to Peter Pipkorn.
Change-Id: Ic83b20ee470e54d5053f747dbcbdf7b038b7c7c4
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
-Add a module param to enable/disable carveout killer
-Fix race condition in code to wait for something to free memory
after firing carveout killer
-Fix the check for current so we always compare task->group_leaders
Change-Id: Ie030978827dce6b0fbbfa1db0d80e4abe59eaa51
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
-Modify the carveout killer to only kill tasks with lower priorities
than the one that's trying to allocate
-After delivering a sigkill to a task, wait for something to exit and
cleanup before retrying the allocation
Change-Id: If62b6ed008a73fc3c347ff26735a83eee284909e
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
No need to maintain a reference to the task struct if the client
is a kernel thread. In this case just set the task to NULL.
Change-Id: Ica4785388932f6b298eeb0da04b78b0e1cdc3a44
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
This change attempts to reclaim carveout memory by killing
other carveout users when an allocation fails. Processes
are killed in order of priority from lowest to highest, and then
from largest to smallest users.
Change-Id: Iee8a6f36269bc8165d691000a153dbf9f4337775
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
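The kill ordering described above (lowest priority first, then largest carveout user first) can be sketched as a comparator; the struct and field names are illustrative, not the real nvmap types, and "higher value = more important" is an assumption:

```c
#include <stddef.h>

struct carveout_user {
    int priority;   /* assumed: higher value = more important */
    size_t usage;   /* bytes of carveout held */
};

/* qsort-style comparator: earlier in the sorted order = killed first.
 * Lowest-priority users die first; ties go to the largest user. */
static int kill_order(const struct carveout_user *a,
                      const struct carveout_user *b)
{
    if (a->priority != b->priority)
        return a->priority - b->priority;     /* lower priority first */
    if (a->usage != b->usage)
        return a->usage > b->usage ? -1 : 1;  /* larger usage first */
    return 0;
}
```

Killing the largest user among equals maximizes the memory reclaimed per SIGKILL delivered.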
Change-Id: I1ec34fd4a6bb21a6d84912a7228c209f459261be
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
A struct nvmap_handle may be shared by multiple clients. If the
original client (the handle "owner") is destroyed, but the handle is
still referenced by other clients, h->owner points to freed memory. To
prevent this, clear h->owner when the owner frees its reference to that
struct nvmap_handle.
Change-Id: I54722091568ce2058f5988e5f6e00e68605a8100
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
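The dangling-owner fix described above can be sketched as follows, with illustrative struct layouts rather than the real nvmap definitions:

```c
#include <stddef.h>

struct nvmap_client;  /* opaque here; freed independently of handles */

struct nvmap_handle {
    struct nvmap_client *owner;  /* client that created the handle */
    int ref_count;               /* handle is shared by multiple clients */
};

/* When a client drops its reference, clear h->owner if that client is
 * the owner, so surviving sharers never follow a pointer into freed
 * client memory. */
static void handle_owner_release(struct nvmap_handle *h,
                                 struct nvmap_client *client)
{
    if (h->owner == client)
        h->owner = NULL;
    h->ref_count--;
}
```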
Fix the way the total number of carveout allocations is
managed per client.
Change-Id: I3e12e2a98a74cafc1f4c51a48e3c3c549e930160
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
This reverts commit e3ad53ad739afae7e8a4252c807a195e2311cfa7.
This modifies the API to allow users to specify a name for
their clients, so the system can track kernel allocations by
name.
Change-Id: I44aad209bc54e72126be3bebfe416b30291d206c
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
Move the file that tracks clients to debugfs.
Add a debugfs file that lists each client's allocations.
Change-Id: I2bb683e3ac0599fa05d962c79ef0b7cbd0007d75
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
Prints a log message if the nvmap allocate ioctl fails.
Change-Id: Ia0777bc2fcd665dafff0f8948b01faad3f552d72
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
In the current implementation handles hold references to a
client and clients hold references to their handles. As a
result, when a process terminates its handles can't be cleaned
up and we leak memory. Instead, only hold references to handles
from clients.
Change-Id: Iba699e740a043deaf0a78b13b4ea01544675078f
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
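The one-way reference scheme described above can be sketched with illustrative types (not the real nvmap structures): clients reference handles, handles hold nothing back, so a dying client can always release everything it owns:

```c
#include <stddef.h>
#include <stdbool.h>

struct handle {
    int ref;        /* dropped when a client releases the handle */
    bool freed;
};

static void handle_put(struct handle *h)
{
    if (--h->ref == 0)
        h->freed = true;
}

#define MAX_REFS 8

/* The client holds references to handles; handles hold no reference
 * back to the client, breaking the cycle that caused the leak. */
struct client {
    struct handle *refs[MAX_REFS];
    int nrefs;
};

static void client_destroy(struct client *c)
{
    for (int i = 0; i < c->nrefs; i++)
        handle_put(c->refs[i]);
    c->nrefs = 0;
}
```

With the old cyclic scheme, neither object's refcount could reach zero once the process died; the one-way scheme lets client teardown drive handle teardown.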
This patch adds the ability to track the total allocations in a
given carveout heap by client. It also adds a sysfs file that prints
the list of clients, their PIDs, and their respective carveout sizes.
Change-Id: I34fc97c3be574d2bd30d7594320ff05f6e13c476
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
Change-Id: Icfe552ad4a968329a1a2959d5b438062587a83b6
Signed-off-by: Colin Cross <ccross@android.com>
The framebuffer driver needs to be able to arbitrarily pin whatever
gets handed to it. Regardless of the interface used, functions need
to unpin as soon as they finish using the gart anyway.
Change-Id: Ida8aea2fb6eaca8bcbf3ae72f8dfa849dc198542
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
Remove the dependency that nvmap has on the arm_attrib_allocator
and the lowmem-in-PTEs change by adding a private page-allocator
utility function and calling vm_map_ram unconditionally for all
sysmem handles.
Also add Kconfig variables to allow platforms to disallow the
SYSMEM heap, and to optionally restrict the SYSMEM and IOVMM
heaps to HIGHMEM only.
Change-Id: I3dab1c7323f54a8ab3994dc672b27fd79a9057d7
Signed-off-by: Gary King <gking@nvidia.com>
Low mem pages are allocated in larger super pages and their caching
attributes can't be controlled on a per page basis. This patch
forces nvmap to map out of highmem pages which are guaranteed to have
page mappings.
Change-Id: Id3921342ecceb0345d43365d4dd90b82ca8cfd11
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
A '>=' vs '>' error in the bounds check for the read and write
ioctls caused failures when reading the last byte of a handle.
The super-user node (knvmap) wasn't registered correctly due to a
cut-and-paste error, and the regular user node was assigned
super-user privileges.
Noref pinning didn't validate that the specified handle existed
before pinning it, which left the handle's reference count
imbalanced on a subsequent unpin.
Change-Id: I9985b85023705b00389a53fb962c3b60d62da6b8
Signed-off-by: Gary King <gking@nvidia.com>
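The bounds-check fix for the read/write ioctls amounts to using '>' on the end offset so a request ending exactly at the handle's size is accepted; a sketch with a hypothetical helper name:

```c
#include <stdbool.h>
#include <stddef.h>

/* A request may touch the last byte, i.e. end exactly at handle_size.
 * The buggy version rejected that case by testing '>=' on the end. */
static bool rw_region_valid(size_t offset, size_t len, size_t handle_size)
{
    if (len == 0)
        return false;
    if (offset >= handle_size)            /* start must lie inside */
        return false;
    return !(offset + len > handle_size); /* end may equal the size */
}
```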
nvmap provides an interface for user- and kernel-space clients to
allocate and access memory "handles", which can be pinned to enable
the memory to be shared with DMA devices on the system, and may
also be mapped (using caller-specified cache attributes) so that
they are directly accessible by the CPU.
The memory handle object gives clients a common API to allocate from
multiple types of memory: platform-reserved, physically contiguous
"carveout" memory; physically contiguous (order > 0) OS pages;
or physically discontiguous order-0 OS pages that can be remapped
into a contiguous region of the DMA device's virtual address space
through the tegra IOVMM subsystem.
Unpinned and unmapped memory handles are relocatable at run time
by the nvmap system. Handles may also be shared between multiple
clients, allowing (for example) a window manager and its client
applications to share framebuffers directly.
Change-Id: Ie8ead17fe7ab64f1c27d922b1b494f2487a478b6
Signed-off-by: Gary King <gking@nvidia.com>