path: root/drivers/video/tegra/nvmap/nvmap_priv.h
Age  Commit message  Author
2017-03-20  video: tegra: nvmap: fix time-of-check,time-of-use vulnerability  (Sri Krishna chowdary)
Validate the region specified by offset and size before performing operations such as nvmap_prot_handle, nvmap_cache_maint and nvmap_handle_mk*. Validating offset and size once the values are in local variables guarantees that, even if user space changes the values in the user buffers, nvmap continues to perform the operations with the contents that were validated. Fixes Google Bug 34113000. bug 1862379 Change-Id: Ief81887b3d94b49f3dcf4d2680d9d7b257c54092 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Signed-off-by: Bibek Basu <bbasu@nvidia.com> Reviewed-on: http://git-master/r/1298712 (cherry picked from commit f45441da608d8015ece73d253d4bdb48863f99e2) Reviewed-on: http://git-master/r/1310316 (cherry picked from commit 57367ab3be5f1c52dd6b885f114ae90dfce5a363) Reviewed-on: http://git-master/r/1319910 GVS: Gerrit_Virtual_Submit
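A minimal sketch of the copy-then-validate pattern this fix describes, in kernel-style C; the argument struct and handler below are illustrative, not the actual nvmap ioctl interface:

#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

struct cache_op_args {			/* hypothetical ioctl argument */
	__u64 offset;
	__u64 size;
};

static long cache_op_ioctl(void __user *arg, u64 handle_size)
{
	struct cache_op_args op;
	u64 offset, size;

	/* Copy the user buffer exactly once into kernel memory. */
	if (copy_from_user(&op, arg, sizeof(op)))
		return -EFAULT;

	/* Validate the local copies, never the user buffer itself. */
	offset = op.offset;
	size = op.size;
	if (!size || offset + size < offset || offset + size > handle_size)
		return -EINVAL;

	/*
	 * Only the validated locals are used from here on, so a concurrent
	 * write to the user buffer can no longer change what was checked.
	 */
	return 0;	/* the real cache maintenance would run here */
}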
2016-04-28  video: tegra: nvmap: Add ref count in nvmap_vma_list  (Sri Krishna chowdary)
Add a ref count to prevent invalid vma removal from the h->vmas list, and to allow adding into the h->vmas list a different vma that has the same nvmap_vma_priv as vm_private_data. Both cases occur in valid usage of nvmap_vma_open/nvmap_vma_close. Bug 200164002 Change-Id: Ifc4d281dd91e1d072a9a3ee85e925040bd65a6bc Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Signed-off-by: Bryan Wu <pengw@nvidia.com> Reviewed-on: http://git-master/r/1133708 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
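A sketch of what a ref-counted vma list entry along these lines could look like; the field and function names are illustrative, and the h->lock that would normally protect the list is omitted:

#include <linux/list.h>
#include <linux/mm_types.h>
#include <linux/atomic.h>
#include <linux/slab.h>

struct vma_list_entry {
	struct list_head list;
	struct vm_area_struct *vma;
	pid_t pid;		/* owning process, per the vma-tracking patch below */
	atomic_t ref;		/* balances nvmap_vma_open/nvmap_vma_close */
};

/* nvmap_vma_open: if the vma is already tracked, just take a reference. */
static bool vma_entry_get(struct vma_list_entry *entry)
{
	return atomic_inc_not_zero(&entry->ref);
}

/* nvmap_vma_close: unlink the entry only when the last reference drops. */
static void vma_entry_put(struct vma_list_entry *entry)
{
	if (atomic_dec_and_test(&entry->ref)) {
		list_del(&entry->list);
		kfree(entry);
	}
}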
2015-05-12  video: tegra: nvmap: clean cache during page allocations into page pool  (Krishna Reddy)
Clean cache during page allocations into page pool to avoid cache clean overhead at the time of allocation. Increase page pool refill size to 1MB from 512KB. Bug 1539190 Change-Id: I6c45782e54879541f7b518bbbb016383b24e376b Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/453197 Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com> GVS: Gerrit_Virtual_Submit Reviewed-by: Alex Waterman <alexw@nvidia.com> Reviewed-by: Michael I Gold <gold@nvidia.com> [ccross: moved on top of background zeroing patches, replaced atomic with bool since it has to be protected by a lock anyways] Signed-off-by: Colin Cross <ccross@android.com> Reviewed-on: http://git-master/r/664676 Reviewed-on: http://git-master/r/736430 Tested-by: Alex Waterman <alexw@nvidia.com>
2015-05-12  nvmap: page pools: replace background allocator with background zeroer  (Colin Cross)
The background allocator rapidly becomes useless once the system has filled memory with cached pages. It refuses to allocate when free memory is below 128MB (which is almost always the case, since the kernel aims to keep very little memory free), and freed pages are not returned to the page pool when zero_memory=1. Remove the background allocator completely, and instead return freed memory to the page pool in a separate list to be zeroed in the background. This results in a self-balancing pool of memory available to graphics, and reduces pressure on the kernel's page allocator. If the pool grows too big it will get reduced by the shrinker. If it gets too small, the next allocation will fall back to the page allocator, and then later return those pages to the pool. Before this change, the incremental page pool hit rate reported by /d/nvmap/pagepool/page_pool_hits vs. /d/nvmap/pagepool/page_pool_misses drops to 0% after boot. After this change it is near 100% for small app launches and 75% for larger app launches. Change-Id: I4bc914498d7d0369eef9e621bda110d9b8be90b2 Signed-off-by: Colin Cross <ccross@android.com> Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/664674 GVS: Gerrit_Virtual_Submit Reviewed-on: http://git-master/r/736428 Reviewed-by: Alex Waterman <alexw@nvidia.com> Tested-by: Alex Waterman <alexw@nvidia.com>
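The rough shape of such a two-list pool, as a sketch only; the structure and function names are illustrative, and the real nvmap pool additionally integrates with the shrinker and tracks page counts:

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/highmem.h>

struct page_pool {
	struct mutex lock;
	struct list_head zero_list;	/* freed pages, not yet zeroed */
	struct list_head page_list;	/* zeroed pages, ready to hand out */
};

/* Free path: park the page on the zero list; no zeroing in the hot path. */
static void pool_free_page(struct page_pool *pool, struct page *page)
{
	mutex_lock(&pool->lock);
	list_add_tail(&page->lru, &pool->zero_list);
	mutex_unlock(&pool->lock);
}

/* Background work: zero pages and move them to the ready list. */
static void pool_zero_pages(struct page_pool *pool)
{
	struct page *page;

	mutex_lock(&pool->lock);
	while (!list_empty(&pool->zero_list)) {
		page = list_first_entry(&pool->zero_list, struct page, lru);
		list_del(&page->lru);
		mutex_unlock(&pool->lock);

		clear_highpage(page);		/* zero outside the lock */

		mutex_lock(&pool->lock);
		list_add_tail(&page->lru, &pool->page_list);
	}
	mutex_unlock(&pool->lock);
}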
2015-05-12  nvmap: page pool: fix background thread  (Colin Cross)
Fix a race condition in the background allocator where wake_up_process could be called just before set_current_state changed the state to TASK_INTERRUPTIBLE, causing the thread not to wake. Use a waitqueue instead. Also make the background allocator nicer by marking it freezable so it doesn't compete with suspend, and setting it SCHED_IDLE so it only runs when no other threads want to run. Change-Id: If95da005bb1fc4c9b5e802d40730803a57057fe1 Signed-off-by: Colin Cross <ccross@android.com> Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/664673 GVS: Gerrit_Virtual_Submit Reviewed-on: http://git-master/r/736427 Reviewed-by: Alex Waterman <alexw@nvidia.com> Tested-by: Alex Waterman <alexw@nvidia.com>
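The waitqueue pattern the fix describes, sketched with illustrative names: unlike a bare wake_up_process()/set_current_state() pair, wait_event_freezable() re-checks its condition under the waitqueue's own synchronization, so a wakeup that races with going to sleep is not lost (the real thread is also made freezable and SCHED_IDLE, as noted above):

#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/freezer.h>

static DECLARE_WAIT_QUEUE_HEAD(bg_wait);
static bool bg_work_pending;

static int bg_thread_fn(void *data)
{
	set_freezable();	/* cooperate with suspend instead of racing it */

	while (!kthread_should_stop()) {
		wait_event_freezable(bg_wait,
				     bg_work_pending || kthread_should_stop());
		bg_work_pending = false;
		/* ... refill/zero pool pages here ... */
	}
	return 0;
}

/* Producer side: make the condition true, then wake the queue. */
static void bg_kick(void)
{
	bg_work_pending = true;
	wake_up(&bg_wait);
}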
2015-05-12  nvmap: combine two methods of requesting zeroed memory  (Colin Cross)
Combine CONFIG_NVMAP_FORCE_ZEROED_USER_PAGES and the zero_memory modparam into a single option by forcing zero_memory=1 when CONFIG_NVMAP_FORCE_ZEROED_USER_PAGES is set, and always using zero_memory to decide whether to zero or not. Change-Id: I9ce0106cfaea950bd9494b697916fbc2a03329ea Signed-off-by: Colin Cross <ccross@android.com> Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/664672 GVS: Gerrit_Virtual_Submit Reviewed-on: http://git-master/r/736426 Reviewed-by: Alex Waterman <alexw@nvidia.com> Tested-by: Alex Waterman <alexw@nvidia.com>
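One plausible way to express that collapse, purely as a sketch (the actual parameter permissions and handling in nvmap may differ):

#include <linux/module.h>

#ifdef CONFIG_NVMAP_FORCE_ZEROED_USER_PAGES
static bool zero_memory = true;			/* config option forces zeroing on */
module_param(zero_memory, bool, 0444);		/* read-only when forced */
#else
static bool zero_memory;
module_param(zero_memory, bool, 0644);
#endif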
2015-05-12  nvmap: page pools: hide internal lock from nvmap_handle.c  (Colin Cross)
The internal pool lock is exported so that nvmap_handle can lock it, call a *_locked function, and then unlock it. Provide a version of the *_locked functions that takes the lock, remove the lock and unlock helpers, and make the lock private to the pools again. Change-Id: I5a99753058e43161d50a0c61f3a984655cd7cd35 Signed-off-by: Colin Cross <ccross@android.com> Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/664671 GVS: Gerrit_Virtual_Submit Reviewed-on: http://git-master/r/736425 Reviewed-by: Alex Waterman <alexw@nvidia.com> Tested-by: Alex Waterman <alexw@nvidia.com>
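The pattern in miniature, with illustrative names: the lock stays private to the pool file, an internal *_locked helper assumes it is held, and the exported entry point takes it itself:

#include <linux/mutex.h>
#include <linux/mm.h>

static DEFINE_MUTEX(pool_lock);

/* Internal: caller must hold pool_lock. */
static int pool_alloc_locked(struct page **pages, int nr)
{
	/* ... pull up to nr pages off the pool's list ... */
	return 0;
}

/* Exported: callers outside the pool never see the lock. */
int pool_alloc(struct page **pages, int nr)
{
	int got;

	mutex_lock(&pool_lock);
	got = pool_alloc_locked(pages, nr);
	mutex_unlock(&pool_lock);
	return got;
}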
2015-05-12  nvmap: replace page pool array with a list  (Colin Cross)
struct page already has a list node that is available to use by whoever got the page with alloc_page. Use it to keep the free pages in the pool in a list instead of a circular buffer in an array. Change-Id: I0377633be7d620b59daf34799bd4ebc5fd9443fb Signed-off-by: Colin Cross <ccross@android.com> Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/664670 GVS: Gerrit_Virtual_Submit Reviewed-on: http://git-master/r/736424 Reviewed-by: Alex Waterman <alexw@nvidia.com> Tested-by: Alex Waterman <alexw@nvidia.com>
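A minimal sketch of that change: pool pages are threaded through the list_head already embedded in struct page (page->lru), so no separate array is needed (names illustrative, locking omitted):

#include <linux/list.h>
#include <linux/mm.h>

static LIST_HEAD(pool_free_pages);

static void pool_push(struct page *page)
{
	list_add_tail(&page->lru, &pool_free_pages);
}

static struct page *pool_pop(void)
{
	struct page *page;

	if (list_empty(&pool_free_pages))
		return NULL;	/* caller falls back to alloc_page() */

	page = list_first_entry(&pool_free_pages, struct page, lru);
	list_del(&page->lru);
	return page;
}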
2014-09-26  video: tegra: nvmap: cleanup redundant functions  (Maneet Singh)
Removed the redundant function unmarshal_user_id() and replaced it with unmarshal_user_handle(), which it internally called, without any other changes. Bug 1553082 Change-Id: I7d998966c593f11a3322b0503ef11311fc1ae5e7 Signed-off-by: Maneet Singh <mmaneetsingh@nvidia.com> Reviewed-on: http://git-master/r/498103 (cherry picked from commit 4880b6c2bdf5b10e4a71b5b79e7878343b9e7e3b) Reviewed-on: http://git-master/r/538985 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Winnie Hsu <whsu@nvidia.com>
2014-08-08  video: tegra: nvmap: track handle's vma list  (Maneet Singh)
Patch includes the following nvmap changes: - added a "pid" field in nvmap_vma_list, so that looking at a handle's vma list we can tell which vma belongs to which process. - sorted the handle's vma list in ascending order of handle offsets. Bug 1529015 Change-Id: Ide548e2d97bab8072461c11c9b8865ab4aa01989 Signed-off-by: Maneet Singh <mmaneetsingh@nvidia.com> Reviewed-on: http://git-master/r/448493 (cherry picked from commit 37132fa461d23552b805e32d268acd14b27588c3) Reviewed-on: http://git-master/r/448576 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Winnie Hsu <whsu@nvidia.com>
2014-08-08  video: tegra: nvmap: remove carveout commit accounting  (Krishna Reddy)
Remove obsolete carveout commit accounting. Bug 1529015 Change-Id: If7e25ca2ef43c036558c9c9ead5f67ee8eef6b42 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/426734 (cherry picked from commit c1ddad1b13332386857f9f2964aa8968094e7e8c) Reviewed-on: http://git-master/r/448554 Tested-by: Winnie Hsu <whsu@nvidia.com>
2014-08-08  video: tegra: nvmap: unify debug stats code  (Krishna Reddy)
Unify debug stats code for iovmm and carveouts. Bug 1529015 Change-Id: Ief800587870845ed6f566cb7afb2c91000d177ca Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/426733 (cherry picked from commit 0c0f7e5a9ef459d7940cc66af0a00321bb54d389) Reviewed-on: http://git-master/r/448536 Tested-by: Winnie Hsu <whsu@nvidia.com>
2014-08-08  video: tegra: nvmap: don't count shared memory in full  (Krishna Reddy)
Don't count shared memory in full in iovmm stats. Add a SHARE field to the allocations info to show how many processes are sharing the handle. Update a few comments in the code. Remove unnecessary iovm_commit accounting. Bug 1529015 Change-Id: I49650bf081d652dedc7139f639aae6da06965ecd Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/426274 (cherry picked from commit 92d47c10fbf7a315d4c953bafb71ee23032b7f65) Reviewed-on: http://git-master/r/448533 Tested-by: Winnie Hsu <whsu@nvidia.com>
2014-08-08  video: tegra: nvmap: add handle share count to debug stats  (Krishna Reddy)
Handle share count provides info on how many processes are sharing the handle, i.e., how many processes are holding a ref on the handle. Update the comments for umap/kmap_count. Bug 1529015 Change-Id: I9f543ebf51842dad6ecd3bfeb7480496c98963be Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/426302 (cherry picked from commit 244c41508be0705cc232942b9403e17611f63e45) Reviewed-on: http://git-master/r/448521 Tested-by: Maneet Maneet Singh <mmaneetsingh@nvidia.com>
2014-06-20  video: tegra: nvmap: track vma for all handles  (Krishna Reddy)
Clean up the code related to mmap and handle nvmap_map_info_caller_ptr failures gracefully. Initialize h->vmas at the right place. Add sanity checks in nvmap_vma_open/_close. Bug 1519700 Change-Id: Iede355b8a500a787992fcb23a72cf334a737ec49 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/419168 (cherry picked from commit c18228c5de319d74f68deff9c5d402ca17b64e95) Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/426092 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2014-06-09  arm64: nvmap: warn on use of 'uncached' nvmap memory type on arm64  (Rich Wiley)
Change-Id: I7c6cd5596e6a900996cd8f732285486c69586a6d Signed-off-by: Rich Wiley <rwiley@nvidia.com> Reviewed-on: http://git-master/r/418487 Reviewed-by: Mandar Padmawar <mpadmawar@nvidia.com> Tested-by: Mandar Padmawar <mpadmawar@nvidia.com>
2014-05-30  video: tegra: nvmap: Use clean for Denver  (Rich Wiley)
On ARMv7 using a full flush was necessary for two reasons. One, the zeroed pages needed to be written to DRAM or the PoU so that HW devices could not read the memory out from under the CPU's cache. And two, the cache lines needed to be invalidated so that if userspace immediately mapped the memory as something other than cached they would not get invalid hits in the cache. However, on Denver the risk of invalid hits in the cache due to lines being left valid no longer exists. Thus we only need to ensure the zeroed memory is past the PoU; for this a clean will suffice. Also, on Denver invalidates are extremely expensive, therefore removing any invalidates that are not necessary greatly helps performance. Change-Id: Ie952f5db62e1312562ef2ef9c61ba3e37e3efc76 Signed-off-by: Alex Waterman <alexw@nvidia.com> Signed-off-by: Rich Wiley <rwiley@nvidia.com> Reviewed-on: http://git-master/r/407856 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2014-05-13  video: tegra: nvmap: fix nvmap_handle_mk  (Sri Krishna chowdary)
Loop must run from start_page to end_page and not 2*start_page to end_page. Bug 1444151 Change-Id: I71e35abcf9bda97694d1398d130c0ebf1440a62f Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/404263 (cherry picked from commit 203ffb9cec0131be3d84c92abcc7cf5fe5504e6a) Reviewed-on: http://git-master/r/408169 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2014-04-23  video: tegra: nvmap: implement reserve pages ioctl  (Sri Krishna chowdary)
On a reserve request: mark the pages within the specified range as reserved for the device. On a release request: reset the reserved bits for the reserved pages within the range specified in each handle. bug 1444151 Change-Id: Ia565bdb86db9f345004a12861138a26a9b6fc243 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/376602 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-04-23  video: tegra: nvmap: add new flag CACHE_SYNC  (Krishna Reddy)
Allocating with this flag allows user space to perform cache maintenance for the dirty pages alone. Rename PHYSICALLY_CONTIGUOUS flag to PHYS_CONTIG. Change-Id: I56d2bce395a46357409048455ab82c2b59f65436 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/398348
2014-04-23  video: tegra: nvmap: track cpu dirty pages  (Sri Krishna chowdary)
A page is marked cpu dirty when a user space access faults or when a kernel access happens through dma_buf_kmap. A page is marked cpu clean when it is written back. This operation is valid only for write back, i.e., when the buffer is synced for the device. For kernel accesses through kmap/vmap it is assumed that the client calls begin/end_cpu_access to take care of sync for device/cpu immediately after the access, and hence we do not track such dirty pages. Enable tracking of cpu dirty pages if auto_cache_sync is enabled for the handle. Bug 1444151 Change-Id: Ic03478702fad67b6d7682eb20404e3f88114d81c Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/391816 Reviewed-by: Automatic_Commit_Validation_User
2014-04-23  video: tegra: nvmap: fix bug in nvmap_handle_mk  (Krishna Reddy)
Change-Id: Idfe61da2f80ed2c4ba8262480d72c2344081f482 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/398083 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com>
2014-04-23  video: tegra: nvmap: add helper functions for handle clean and reserved  (Krishna Reddy)
Add helper functions for handle clean, reserved and unreserved. Bug 1444151 Change-Id: Ie14d60555200b73928c6b829a719783139c36b4f Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/397453
2014-04-23  video: tegra: nvmap: fix incorrect zapping of handles  (Krishna Reddy)
Bug 1444151 Change-Id: I0d385d36bd62f6e60c6ad1296a60fc997bc13e80 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/397420
2014-04-23  video: tegra: nvmap: vma need to be tracked during mmap  (Krishna Reddy)
Add missing vma tracking during mmap. Bug 1444151 Change-Id: Id0485237c96e97a3e1da55f14e5533a48fd2bde7 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/397390
2014-04-23  video: tegra: nvmap: fix incorrect nvmap_page_dirty and nvmap_page_reserved  (Krishna Reddy)
Fix incorrect checks in nvmap_page_dirty() and nvmap_page_reserved(). Remove redundant braces. Bug 1444151 Change-Id: I66fa2d77816dca8630c4d0b59931237aae4fe9e7 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/396512 Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com> Tested-by: Sri Krishna Chowdary <schowdary@nvidia.com>
2014-04-23  video: tegra: nvmap: select range on cache list ops  (Sri Krishna chowdary)
Selecting the memory region within each handle can help reduce cache maintenance overhead when performing cache ops on a list of handles, especially when only a few pages from each handle in a long list are modified. Bug 1373180 Change-Id: I2e81eba4703d3b311e7e82798cc23fbd7adf6303 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/392388 GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-04-23  video: tegra: nvmap: Define cpu dirty pages count  (Sri Krishna chowdary)
Add an ndirty field in struct nvmap_pages to count the cpu dirty pages in pgalloc.pages, in particular the user space accessed pages. It is defined as atomic_t to take care of any race conditions. This can help skip cache maint on handles which do not have any dirty pages. Bug 1444151 Change-Id: Id12e94b9aac071f2c92da74e2171866362442d57 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/383794 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-04-23  video: tegra: nvmap: support clean only dirty option on write back  (Sri Krishna chowdary)
Modify the cache maintenance operation to take an option specifying whether to clean only dirty pages or all pages from the cache within the specified range. This helps avoid the unnecessary overhead of cleaning pages which are not dirty. The clean-only-dirty option is preferred for user space accesses, since dirty pages can be tracked only for those. For the kernel, since there is no mechanism to find the dirty pages, this option cannot be used. Bug 1444151 Change-Id: Ib6df78a3fb926d1327f25bf9d1320a743381b2d9 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/395353 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
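A sketch of how such a clean-only-dirty walk could look, leaning on the per-page dirty helpers sketched under the 2014-04-09 "helpers to track page type" entry further down; the function signature and the placeholder clean step are assumptions, not the actual nvmap code:

#include <linux/mm.h>

static void clean_pages_range(struct page **pages, unsigned long start_page,
			      unsigned long end_page, bool clean_only_dirty)
{
	unsigned long i;

	for (i = start_page; i < end_page; i++) {
		/* For user buffers, skip pages the CPU never touched. */
		if (clean_only_dirty && !nvmap_page_dirty(pages[i]))
			continue;

		/* ... clean this page to the point of coherency here ... */

		nvmap_page_mkclean(&pages[i]);
	}
}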
2014-04-10  video: tegra: nvmap: validate handle during duplication  (Vandana Salve)
Validate params passed for read/write operations on mem handle. bug 1454693 Change-Id: I7bba81f0478d358d92ba461728ea098b1e0ff52b Signed-off-by: Vandana Salve <vsalve@nvidia.com> Reviewed-on: http://git-master/r/391904 Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com> Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
2014-04-09  video: tegra: nvmap: helpers to track page type  (Sri Krishna chowdary)
Why does nvmap need to track page type? - To optimize cache operations - To support the reserve operation. The following page types need to be identified to achieve these goals. Reserved: a page can be reserved for DMA by any device; such a page should not be accessed by the cpu. BIT(1) of each struct page * keeps track of such pages. CPU dirty: it is futile to cache clean a page which was not accessed by the cpu before a DMA transfer. BIT(0) of each struct page * helps identify the pages possibly modified by the CPU since the last cache clean; only such pages need to be cleaned. This patch adds helpers for tracking dirty and reserved pages. Bug 1444151 Change-Id: Ide15b308aab20bc218f49ed9e0306ce339455575 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/391383 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
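A sketch of such helpers: because struct page pointers are at least word-aligned, their two low bits are always zero and can carry per-page flags inside the handle's pages[] array. The helper names mirror those mentioned elsewhere in this log, but the macro names and exact signatures here are assumptions:

#include <linux/mm.h>
#include <linux/bitops.h>

#define PAGE_TAG_DIRTY		BIT(0)	/* CPU may have written the page */
#define PAGE_TAG_RESERVED	BIT(1)	/* page reserved for device DMA */
#define PAGE_TAG_MASK		(PAGE_TAG_DIRTY | PAGE_TAG_RESERVED)

/* Strip the tag bits before handing the pointer to the rest of the kernel. */
static inline struct page *nvmap_to_page(struct page *page)
{
	return (struct page *)((unsigned long)page & ~PAGE_TAG_MASK);
}

static inline bool nvmap_page_dirty(struct page *page)
{
	return (unsigned long)page & PAGE_TAG_DIRTY;
}

static inline void nvmap_page_mkdirty(struct page **page)
{
	*page = (struct page *)((unsigned long)*page | PAGE_TAG_DIRTY);
}

static inline void nvmap_page_mkclean(struct page **page)
{
	*page = (struct page *)((unsigned long)*page & ~PAGE_TAG_DIRTY);
}

static inline bool nvmap_page_reserved(struct page *page)
{
	return (unsigned long)page & PAGE_TAG_RESERVED;
}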
2014-04-09  video: tegra: nvmap: rename altalloc/altfree  (Krishna Reddy)
rename altalloc/altfree to nvmap_altalloc/nvmap_altfree and expose to all nvmap files. Change-Id: I6a5d75563f4c5cffe162157fee4bafe1241b690b Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/394291 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com> Tested-by: Sri Krishna Chowdary <schowdary@nvidia.com>
2014-04-09  video: tegra: nvmap: Move page pool debugfs  (Alex Waterman)
Move the pagepool debugfs nodes from the iovmm directory to their own new directory. This avoids unnecessary clutter and promotes logical organization of the nvmap debugfs nodes. Change-Id: I337e3341f84cf4c2fded21503bd8b63885520500 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/391598 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-04-09  video: tegra: nvmap: Remove old ZP support  (Alex Waterman)
Remove the old foreground page zeroing support. This is replaced by the background zeroed page pool support. If the page pools are empty, zeroing happens in the allocation context. Bug 1371433 Bug 1392833 Change-Id: I10660f9e7fc0d3d917b97239dccfda35d8895202 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/383931 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-04-08  video: tegra: nvmap: Fix defaults for page pool  (Alex Waterman)
Change-Id: I1feaa87dca9e9a9c395b1fd107248f27e68d6639 Signed-off-by: Alex Waterman <alexw@nvidia.com>
2014-04-07  video: tegra: nvmap: Add background allocator  (Alex Waterman)
Add a background kernel thread that allocates memory into the page pool. This allows zeroed pages to be allocated directly into the page pool. In turn this avoids that overhead in the allocation path itself (for page pool hits at least). Pre-flushing the pages being placed into the page pool will be implemented later. Bug 1371433 Bug 1392833 Change-Id: I78ae5cc6321f711c9433530aec17d8ae6d581c47 Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/383930 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-04-07  video: tegra: nvmap: cache op for list of handles  (Sri Krishna chowdary)
Modify nvmap_flush_cache_list to perform any cache maintenance op as requested. Also, rename the function to nvmap_do_cache_maint_list. Bug 1444151 Change-Id: Ic8458301506b0073a990c9a044980de29f4764b2 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/391817 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-04-07  video: tegra: nvmap: support zap VA->PA mappings  (Sri Krishna chowdary)
Zapping the VA->PA mapping is a necessary step in order to track whether a page is accessed by the CPU. Unless zapping is performed, a user space access does not result in a page fault, and there is no other way nvmap can track it. bug 1444151 Change-Id: Iee90b2c9db339d596f3d061aeaa1d2c210367778 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/377861 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com> Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
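Conceptually, zapping one tracked mapping looks like the sketch below; the real code walks the handle's vma list, and the zap_page_range() signature shown (with a struct zap_details argument) matches 3.x-era kernels and has since changed, so treat this purely as an illustration:

#include <linux/mm.h>

static void zap_handle_vma(struct vm_area_struct *vma, unsigned long offset,
			   unsigned long size)
{
	/*
	 * Drop the PTEs covering this range; the next CPU access faults,
	 * and the fault handler can mark the page as CPU dirty.
	 */
	zap_page_range(vma, vma->vm_start + offset, size, NULL);
}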
2014-04-01  video: tegra: nvmap: clean dcache on write back op  (Sri Krishna chowdary)
inner_clean_cache_all has been flushing the dcache although a clean alone is sufficient. Hence, call __clean_dcache_all to clean the dcache and avoid the overhead of invalidation. Bug 1489709 Change-Id: Iaba8c117a96cd2fafceb3d9584e97ad4f9702eb3 Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com> Reviewed-on: http://git-master/r/390072 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-03-31  video: tegra: nvmap: check for valid handle  (Vandana Salve)
Add check for valid mem handle bug 1454693 Change-Id: If10c27c5249799f027a17cd820439f967f2881c2 Signed-off-by: Vandana Salve <vsalve@nvidia.com> Reviewed-on: http://git-master/r/389792 Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com> Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
2014-03-31  video: tegra: nvmap: treat writecombine requests as dmacoherent  (Krishna Reddy)
NvMap always allocates DRAM or IRAM memory. It never allocates real Device memory. From ARMv8 onwards writecombine memory is real writecombine, which performs worse for DRAM accesses and is unnecessary for DRAM. Treat writecombine requests as dmacoherent, which is normal non-cacheable memory. Change-Id: I5c19a5f0f033eec3680bf6b9db1cbbfb02090f3d Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/379702 GVS: Gerrit_Virtual_Submit Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com>
2014-03-28  video: tegra: nvmap: support deferred fd recycle  (Krishna Reddy)
Support deferred fd recycle. This would help in debugging client issues. Bug 1468931 Change-Id: I2f8604399448ed85f2eb0cd43c695e2e72a25aae Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/387863
2014-03-27  video: tegra: nvmap: remove stale arg passed to __nvmap_do_cache_maint  (Krishna Reddy)
remove unnecessary macros representing stale arg as well. Change-Id: I7bc5ff5ea4be9b261b3eed7b9396ff35b4cba8e3 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/383975
2014-03-26  video: tegra: nvmap: Fix page pool resizing  (Alex Waterman)
Page pool resizing was broken in a previous page pool patch. This patch fixes the sysfs page pool knobs. Change-Id: Icfffe75a78c7a07f1e402e35ba9b3ba3def4b70f Signed-off-by: Alex Waterman <alexw@nvidia.com> Reviewed-on: http://git-master/r/385795 Reviewed-by: Krishna Reddy <vdumpa@nvidia.com> Tested-by: Krishna Reddy <vdumpa@nvidia.com>
2014-03-24  video: tegra: nvmap: avoid exposing private structs to clients  (Krishna Reddy)
Avoid exposing nvmap private structs to clients. Remove unnecessary export symbols. Remove unnecessary config protection in nvmap_priv.h. Change-Id: I7f4195a6156c8521e65194f8492c92fba972d7c9 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/385717 Reviewed-by: Alex Waterman <alexw@nvidia.com> GVS: Gerrit_Virtual_Submit
2014-03-24  video: tegra: nvmap: remove obsolete function nvmap_client_to_device  (Krishna Reddy)
Change-Id: Id5233b7b2b9e291bbfcf3d1b555be99284dc85ff Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/385716 Reviewed-by: Alex Waterman <alexw@nvidia.com> GVS: Gerrit_Virtual_Submit
2014-03-24  video: tegra: nvmap: remove unused variable global  (Krishna Reddy)
Remove the unused variable global from the handle struct. Force the size of flags to a fixed length. Change-Id: I2d1d60318936cc3dca35ed94ff9d561da145b1db Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/385715 Reviewed-by: Automatic_Commit_Validation_User Reviewed-by: Alex Waterman <alexw@nvidia.com> GVS: Gerrit_Virtual_Submit
2014-03-21  video: tegra: nvmap: remove redundant tlb flush  (Krishna Reddy)
remove redundant tlb flush and nvmap_flush_tlb_kernel_range function as well. Change-Id: I93988e22a95cebc4cc0735b49db931f56892eb64 Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/383973
2014-03-20  video: tegra: nvmap: remove unused member variables  (Krishna Reddy)
Remove unused member variables and unnecessary extern declarations. Change-Id: Iefa5afb7041b413a83d4e549aa7f90bfcb026d2e Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/383976
2014-03-20  video: tegra: nvmap: remove variable nvmap_pdev  (Krishna Reddy)
nvmap_pdev is not really necessary. Change-Id: I277b94e3289143561e7b94c316e22c1116c373aa Signed-off-by: Krishna Reddy <vdumpa@nvidia.com> Reviewed-on: http://git-master/r/383974