|
|
|
Conflicts:
drivers/media/video/tegra_v4l2_camera.c
reverted to current driver supporting ACM rather than CSI2
drivers/media/video/videobuf2-dma-nvmap.c
drivers/video/tegra/host/Makefile
|
|
These tweaks are specific to the r16-r2 branch
and will not go into mainline.
- Pass the nvmap memory handle to the user through
the mmap'd buffer allocated by the videobuf2 client.
- Allow the "user" nvmap client to access the
nvmap memory handle of the "videobuf2-dma-nvmap" client.
Also re-arrange the copyright message in nvmap_dev.c
so that automatic validation passes.
Bug 1369083
Change-Id: Ia27d172253860e79557911c2e848bc9084d662d4
Signed-off-by: Vikram Fugro <vfugro@nvidia.com>
Reviewed-on: http://git-master/r/309494
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Kaustubh Purandare <kpurandare@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
Reviewed-by: Matthew Pedro <mapedro@nvidia.com>
|
|
|
|
When an allocation is bigger than the L2 cache size, it is
cheaper to flush or write back the whole L2 than to do
maintenance for each allocated page.
bug 983964
Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Change-Id: Ieaa70875b92920567ad7cd75eca6eac8197f46de
Reviewed-on: http://git-master/r/108511
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
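The heuristic above reduces to a single size comparison. A minimal userspace sketch, with hypothetical names (the real driver queries the actual cache geometry rather than taking it as a parameter):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the maintenance-strategy decision: once the
 * allocation is at least as large as the L2 cache, flushing the whole
 * cache by set/way is cheaper than per-page maintenance by MVA. */
bool use_full_cache_flush(size_t alloc_bytes, size_t l2_bytes)
{
    return alloc_bytes >= l2_bytes;
}
```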
|
|
A handle's usecount used to be incremented once during the mmap ioctl,
and decremented when the mapping was closed by the kernel. However, that
fails if a mapping is cloned, for example when the mapping is split by
a munmap, or (presumably) during fork, because the decrement then
happens once for each cloned mapping.
Therefore, increment the usecount whenever a mapping is opened.
Also fix a BUG_ON() that would have caught this bug, had it not
performed the check by testing whether the unsigned usecount field is
less than zero (which can never be true).
Bug 1033981
Change-Id: I72ac9361a19e44f91ffd6b1126f4632e0f7b6726
Signed-off-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
Reviewed-on: http://git-master/r/123710
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
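The fix follows the usual VMA reference pattern: take a reference in the open callback, which the kernel also invokes for every clone of a mapping (VMA split, fork), and drop it in close. A minimal userspace model with hypothetical names:

```c
/* Hypothetical model of a handle's mapping usecount. */
struct handle {
    unsigned int usecount;
};

/* Called for the original mmap AND for every clone of the mapping
 * (after a VMA split or fork), so each clone holds its own reference. */
void vma_open(struct handle *h)
{
    h->usecount++;
}

/* Called once per mapping when it is torn down; with the reference
 * taken in open, cloned mappings can no longer underflow the count. */
void vma_close(struct handle *h)
{
    h->usecount--;
}
```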
|
|
This patch allows nvmap to use the standard IOMMU API as its backend
engine instead of the legacy Tegra IOVMM, selected via Kconfig.
Enable TEGRA_IOMMU_{GART,SMMU} under IOMMU in the config; this
automatically disables IOVMM.
Change-Id: I71112ef8072591aac4a93075623ca31c8851a960
Signed-off-by: Hiroshi DOYU <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/114217
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
|
|
Allow tegra_iovmm_alloc_client() to take a struct device * instead of
a const char *name, via __tegra_iovmm_alloc_client(). This is necessary
to support IOVMM and IOMMU simultaneously.
Change-Id: I18df5001bfe0ece8f9f15b636eb11def9f228dfb
Signed-off-by: Hiroshi DOYU <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/114215
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
|
|
Change-Id: I2a5f0c9305bd53c42df181556d97efa5d6792ad7
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/106500
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bo Yan <byan@nvidia.com>
|
|
The debug allocations data for IOVMM also included carveout
allocations, and vice versa. Fix it to show only IOVMM allocations
for IOVMM and carveout allocations for carveout.
Also add the missing "FLAGS" print for IOVMM allocations.
Change-Id: I0fd271be24d0d2d3924ca473fd32476776fdcf84
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/104246
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
Reviewed-by: Jon Mayo <jmayo@nvidia.com>
|
|
Use nvmap.h include file from kernel/include instead of mach-tegra/include.
Bug 854182
Change-Id: I385657f45483f2696e99fc2b4ed934fef5decd1e
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/102720
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Jon Mayo <jmayo@nvidia.com>
|
|
Add config option to enable/disable nvmap page pools.
Change-Id: I873e81a675fecd768534d4ce03c2f8fdd3c6a063
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/100424
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
|
|
Refactor page pool code.
Add page pool support for all memory types.
Add support to enable/disable page pools.
Add support to allow configuring page pool size.
Change-Id: I07c79004542efdd5909547928b3aa5d470e38909
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/91914
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Jon Mayo <jmayo@nvidia.com>
|
|
Changing page attributes and doing cache maintenance reduce
performance in applications that reallocate at runtime.
Keep pools of UC & WC pages to avoid these expensive
operations during allocation.
bug 865816
(refactored initial changes from Kirill and added shrinker
notification handling)
Change-Id: I43206efb1adc750ded672bfe074e0648f2f9490b
Signed-off-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-on: http://git-master/r/87532
Reviewed-by: Donghan Ryu <dryu@nvidia.com>
Reviewed-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
Tested-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
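A page pool of this kind can be modeled as a bounded free list: frees go back into the pool instead of reverting page attributes, and allocations are served from the pool when possible. A minimal sketch with hypothetical names and a fixed size (the real pools are per cache attribute and configurable, and shrink under memory pressure via the shrinker):

```c
#include <stddef.h>

#define POOL_MAX 64  /* hypothetical pool capacity */

struct page_pool {
    void  *pages[POOL_MAX];
    size_t count;
};

/* Return a pooled page if one is available, else NULL so the caller
 * falls back to a fresh allocation (plus the expensive attribute
 * change and cache maintenance the pool exists to avoid). */
void *pool_alloc(struct page_pool *p)
{
    return p->count ? p->pages[--p->count] : NULL;
}

/* Give a page back to the pool; return 0 when the pool is full so
 * the caller frees the page for real. */
int pool_release(struct page_pool *p, void *page)
{
    if (p->count == POOL_MAX)
        return 0;
    p->pages[p->count++] = page;
    return 1;
}
```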
|
|
Use correct type.
Change-Id: Iec3ec1624e4f14c36f02fbee6d838f4b2d188569
Signed-off-by: Hiroshi DOYU <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/84947
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
|
|
Change-Id: Ic5f70fe468651dab331059feda438d6e8871ef8a
Signed-off-by: Colin Patrick McCabe <cmccabe@nvidia.com>
Reviewed-on: http://git-master/r/68741
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
|
|
Allocation flags provide useful information about how allocations
were created.
Expose allocation flags in allocation debugfs list.
bug 882345
bug 889003
Reviewed-on: http://git-master/r/61517
(cherry picked from commit 5100f1b09584f079a1547f65ac8b49b27df73292)
Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Change-Id: I2aed0150fe76791550daa1f37d1b5a238af50e1e
Reviewed-on: http://git-master/r/64939
Reviewed-by: Kirill Artamonov <kartamonov@nvidia.com>
Tested-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Rebase-Id: R38ccb94a1d41207b1fe92d345e626963912b2379
|
|
Reviewed-on: http://git-master/r/51026
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Tested-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
(cherry picked from commit 060055eea2082adb6da4cf27462ff699fdf2b4e9)
Change-Id: I9244dff21e9cd62b14f97bfb5a5349eb46f73847
Reviewed-on: http://git-master/r/56107
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Tested-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: Rd3f4f8799a3acad27234422b39a4def0a439e3f3
|
|
Change-Id: Ic7fab1575312afd27430b46a50aa6f7a7de0ac4e
Reviewed-on: http://git-master/r/53843
Reviewed-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
Tested-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
Rebase-Id: R44abb981f52475b4431975c5d38611e7ca93b59a
|
|
Make kernel boot up with CONFIG_TEGRA_IOVMM=n
(cherry picked from commit f10b613bbd27b8a5f25cbbaebecfe50fd9c0be3f)
Change-Id: I980d762bd9feac3881e00015e6db753ae36e79f9
Reviewed-on: http://git-master/r/54509
Tested-by: Hiro Sugawara <hsugawara@nvidia.com>
Reviewed-by: Dan Willemsen <dwillemsen@nvidia.com>
Rebase-Id: R0c90a78e3a9e8d5316faa1fee012f6269ee82035
|
|
- Use 0x0ff00000 (last 1MB of IOVM).
- For Tegra3 A01, use 0xeff00000.
Original-Change-Id: Ieb21d2bf38158171b97434e04ede7417823b3603
Reviewed-on: http://git-master/r/37742
Reviewed-by: Kaz Fukuoka <kfukuoka@nvidia.com>
Tested-by: Kaz Fukuoka <kfukuoka@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
Rebase-Id: Ra84e71077c8c74fdbd977fc838e74f132f173018
|
|
Original-Change-Id: Ic50111924d7adf7838926cb534bbf841b7e8003a
Reviewed-on: http://git-master/r/45358
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: R6399e487b817aa16c39e685d3f7860d1eefa8b09
|
|
bug 827505
Original-Change-Id: If6d4fd137b72c3a08bf8fb1094d8dd31ab361f1c
Reviewed-on: http://git-master/r/31633
Reviewed-by: Kaz Fukuoka <kfukuoka@nvidia.com>
Tested-by: Kaz Fukuoka <kfukuoka@nvidia.com>
Reviewed-by: Frank Thomas Bourgeois <fbourgeois@nvidia.com>
Rebase-Id: R563eaa39b299d89133f7a9fb4b1c7129e8ddb71e
|
|
Add a build-time CONFIG option to force conversion of non-IRAM
carveout memory allocation requests into IOVM requests.
The default is "y", forcing the conversion.
Each forced conversion is reported to the console.
Allocation alignments larger than the page size are enabled for IOVM.
Single-page carveout allocations are converted to system memory.
Carveout memory reservation has been removed for aruba, cardhu,
and enterprise.
Original-Change-Id: I3a598431d15b92ce853b3bec97be4b583d021264
Reviewed-on: http://git-master/r/29849
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: R602260e283f721b7e0f0d802092516ff68c068fe
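The conversion policy described above can be sketched as a small decision function. This is a hypothetical model, not the driver's actual code: IRAM requests are left alone, single-page requests become plain system memory, and everything else is redirected to IOVM when the option is enabled.

```c
#include <stddef.h>

enum heap { HEAP_CARVEOUT, HEAP_IOVM, HEAP_SYSMEM, HEAP_IRAM };

/* Hypothetical model of the CONFIG-controlled conversion: non-IRAM
 * carveout requests become IOVM requests, except that a request
 * fitting in a single page is cheaper as plain system memory. */
enum heap convert_heap(enum heap requested, size_t bytes,
                       size_t page_size, int force_iovm)
{
    if (requested != HEAP_CARVEOUT || !force_iovm)
        return requested;
    if (bytes <= page_size)
        return HEAP_SYSMEM;
    return HEAP_IOVM;
}
```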
|
|
Original-Change-Id: I158d2be97c795313e7e74ce9fb4ec0bdc7d95496
Reviewed-on: http://git-master/r/27559
Tested-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-by: Hiro Sugawara <hsugawara@nvidia.com>
Reviewed-by: Jin Qian <jqian@nvidia.com>
Reviewed-by: Kaz Fukuoka <kfukuoka@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Original-Change-Id: I0ff198daa548ed2837f7fb1794013bf0adf7e5a1
Rebase-Id: R83df5f3b5104183bfe774d8eed8ce94427c9b7fc
|
|
Bug 764354
Original-Change-Id: I807433ff825bed1fe91ce0cf50a2b3691c64ef0a
Reviewed-on: http://git-master/r/12227
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Tested-by: Scott Williams <scwilliams@nvidia.com>
Original-Change-Id: I3da91a438f98f2f51618446ce024f3fefd726a19
Rebase-Id: Rb1717b1f80aaf0242f4da555ce16c06946b7d072
|
|
nvmap's debugfs output had a bad format that made it very
difficult to read. This commit fixes the formatting and also
adds the total allocation size.
Bug 813891
Original-Change-Id: I6e3165b3ff917d9510d39f1e35b8e6b59c086592
Reviewed-on: http://git-master/r/27349
Reviewed-by: Donghan Ryu <dryu@nvidia.com>
Tested-by: Donghan Ryu <dryu@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Rebase-Id: R0959d3648c3fe8e1d0a4bbbca5e79f0b5a744c6f
|
|
video:tegra:nvmap: Clean whole L1 instead of cleaning by MVA
For large allocations, cleaning each page of the allocation can
take a significant amount of time. If an allocation that nvmap needs
to clean or invalidate out of the cache is significantly larger than
the cache, just flush the entire cache by set/ways.
bug 788967
Reviewed-on: http://git-master/r/19354
(cherry picked from commit c01c12e63b1476501204152356867aeb5091fb80)
tegra:video:nvmap: optimize cache_maint operation.
optimize cache_maint operation for carveout and heap memories.
flush carveout memory allocations on memory free.
Bug 761637
Reviewed-on: http://git-master/r/21205
Conflicts:
drivers/video/tegra/nvmap/nvmap_dev.c
drivers/video/tegra/nvmap/nvmap_heap.c
drivers/video/tegra/nvmap/nvmap_ioctl.c
(cherry picked from commit 731df4df5e895e1d4999359d6d5939fc2095f883)
tegra:video:nvmap: optimize cache flush for system heap pages.
optimize cache flush for pages allocated from system heap.
Bug 788187
Reviewed-on: http://git-master/r/21687
(cherry picked from commit 3f318911ad91410aed53c90494210e2b8f74308b)
Original-Change-Id: Ia7b90ba0b50acfef1b88dd8095219c51733e027f
Reviewed-on: http://git-master/r/23465
Reviewed-by: Kirill Artamonov <kartamonov@nvidia.com>
Tested-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Rebase-Id: R04f618f88ed1d2c7a680d51a8c5113f42de3f667
|
|
Enabling mutex debugging revealed potential deadlocks
introduced with compaction.
The handle spinlock is replaced with a mutex: heap functions cannot be
protected with a spinlock because they call kernel slab allocation
functions, which cannot be called from atomic context.
The nvmap_client ref_lock is also replaced with a mutex; otherwise we
cannot access heap parameters protected by the mutex-based
nvmap_handle lock.
Extra locking for handle->owner is removed.
bug 793364
Original-Change-Id: I635ce9ebf259dd7bf8802457567f93b7be5795ea
Reviewed-on: http://git-master/r/19850
Reviewed-by: Kirill Artamonov <kartamonov@nvidia.com>
Tested-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-by: Daniel Willemsen <dwillemsen@nvidia.com>
Rebase-Id: Reaa132703e278d75371d5e2b25426794aa8e0e4e
|
|
There are places where nvmap_free_handle_id is called with
interrupts disabled, so a mutex cannot be used as the nvmap
handle lock.
Original-Change-Id: Icc220fe627c08f21c677d936a54f70c818dc8e8c
Reviewed-on: http://git-master/r/19489
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: Rb5a58e8226ad14340d1acae007d6b632960fae16
|
|
bug 762482
Original-Change-Id: Ifadebc1b0c4eb0df89e179091acca0ff6e527e56
Reviewed-on: http://git-master/r/15743
Reviewed-by: Kirill Artamonov <kartamonov@nvidia.com>
Tested-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: R639e7f09f44c8919bd57a16a577b87db91160555
|
|
-Add a module param to enable/disable the carveout killer
-Fix a race condition in the code that waits for something to free
memory after firing the carveout killer
-Fix the check for current so we always compare task->group_leaders
Change-Id: Ie030978827dce6b0fbbfa1db0d80e4abe59eaa51
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
|
|
-Modify the carveout killer to only kill tasks with lower priorities
than the one that's trying to allocate
-After delivering a sigkill to a task, wait for something to exit and
cleanup before retrying the allocation
Change-Id: If62b6ed008a73fc3c347ff26735a83eee284909e
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
|
|
No need to maintain a reference to the task struct if the client
is a kernel thread. In this case just set the task to NULL.
Change-Id: Ica4785388932f6b298eeb0da04b78b0e1cdc3a44
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
|
|
This change attempts to reclaim carveout memory by killing
other carveout users when an allocation fails. Processes
are killed in order of priority from lowest to highest, and then
from largest to smallest users.
Change-Id: Iee8a6f36269bc8165d691000a153dbf9f4337775
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
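The victim ordering above (lowest priority first, and among equal priorities the largest user first) amounts to a comparison function. A minimal sketch with hypothetical fields, assuming higher priority values mean more important:

```c
/* Hypothetical per-client record used for victim selection. */
struct client {
    int           prio;  /* higher = more important, killed later  */
    unsigned long bytes; /* carveout usage; bigger = killed sooner */
};

/* Return nonzero when 'a' should be killed before 'b': lower
 * priority dies first, then the larger carveout user. */
int kill_before(const struct client *a, const struct client *b)
{
    if (a->prio != b->prio)
        return a->prio < b->prio;
    return a->bytes > b->bytes;
}
```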
|
|
Fix the way the total number of carveout allocations is
managed per client.
Change-Id: I3e12e2a98a74cafc1f4c51a48e3c3c549e930160
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
|
|
This modifies the api to allow the user to specify a name
for their clients. This will allow the system to track
allocations from the kernel by name.
Change-Id: I44aad209bc54e72126be3bebfe416b30291d206c
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
|
|
Move the file that tracks clients to debugfs.
Add a debugfs file to track the list of allocations per client.
Change-Id: I2bb683e3ac0599fa05d962c79ef0b7cbd0007d75
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
|
|
In the current implementation, handles hold references to a
client and clients hold references to their handles. As a
result, when a process terminates its handles can't be cleaned
up and we leak memory. Instead, only hold references to handles
from clients.
Change-Id: Iba699e740a043deaf0a78b13b4ea01544675078f
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
|
|
This patch adds the ability to track, per client, the total
allocations in a given carveout heap. It also adds a sysfs file that
prints the list of clients, their pids, and their respective carveout
sizes.
Change-Id: I34fc97c3be574d2bd30d7594320ff05f6e13c476
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
|
|
A >= vs > error when checking the operating region of the read and
write ioctls caused failures when reading the last byte of a handle.
The super-user node (knvmap) wasn't registered correctly due to a cut-
and-paste error, and the regular user node was assigned super-user
privileges.
Noref pinning wasn't correctly validating that the specified handle
existed before pinning it, which caused the handle's reference count
to become imbalanced on a subsequent unpin.
Change-Id: I9985b85023705b00389a53fb962c3b60d62da6b8
Signed-off-by: Gary King <gking@nvidia.com>
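The >= vs > mistake is the classic inclusive-bound error in a region check: a read of count bytes at offset is valid exactly when offset + count <= size. A minimal model with hypothetical names:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the read/write ioctl region validation.
 * The buggy version effectively rejected offset + count == size
 * (used '>=' where '>' was meant), so reads touching the last
 * byte of a handle failed. */
bool rw_region_valid(size_t offset, size_t count, size_t size)
{
    if (count == 0 || offset >= size)
        return false;
    /* overflow-safe form of: offset + count <= size */
    return count <= size - offset;
}
```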
|
|
nvmap provides an interface for user- and kernel-space clients to
allocate and access memory "handles", which can be pinned so that
the memory can be shared with DMA devices on the system, and may
also be mapped (using caller-specified cache attributes) so that
they are directly accessible by the CPU.
The memory handle object gives clients a common API to allocate from
multiple types of memory: platform-reserved, physically contiguous
"carveout" memory; physically contiguous (order > 0) OS pages;
or physically discontiguous order-0 OS pages that can be remapped
into a contiguous region of the DMA device's virtual address space
through the Tegra IOVMM subsystem.
Unpinned and unmapped memory handles are relocatable at run-time
by the nvmap system. Handles may also be shared between multiple
clients, allowing (for example) a window manager and its client
applications to share framebuffers directly.
Change-Id: Ie8ead17fe7ab64f1c27d922b1b494f2487a478b6
Signed-off-by: Gary King <gking@nvidia.com>
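The three backing-store types above are typically tried in order until one succeeds. The sketch below is a hypothetical model of such a fallback chain (the actual order in nvmap is caller-controlled via heap masks, so this fixed ordering is an assumption for illustration):

```c
enum heap { HEAP_NONE, HEAP_CARVEOUT, HEAP_CONTIG, HEAP_IOVMM };

/* Hypothetical fallback order for backing a handle: carveout,
 * then physically contiguous OS pages, then discontiguous order-0
 * pages remapped through IOVMM. The _ok flags stand in for whether
 * the corresponding allocator could satisfy the request. */
enum heap pick_heap(int carveout_ok, int contig_ok, int iovmm_ok)
{
    if (carveout_ok)
        return HEAP_CARVEOUT;
    if (contig_ok)
        return HEAP_CONTIG;
    if (iovmm_ok)
        return HEAP_IOVMM;
    return HEAP_NONE;
}
```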
|