Age | Commit message | Author |
|
reimplement interrupt controllers following the kernel coding
conventions
propagate set_irq_wake signal from gpio chip to the primary chip
use IRQ_WAKEUP status to control masking and unmasking of interrupts
when entering low-power modes; non-wakeup interrupts are disabled
using disable_irq, and are re-enabled after wakeup with enable_irq
ODM kit wakeup pads are configured with enable_irq_wake during
board initialization
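the wakeup policy above can be sketched as a small userspace simulation (all names here are illustrative stand-ins, not the actual Tegra driver API):

```c
#include <stdbool.h>

/* Hypothetical sketch of the suspend-time policy: interrupts flagged
 * for wakeup stay armed across suspend; everything else is masked and
 * re-enabled on resume. */
struct sim_irq {
    int  num;
    bool wake_enabled;   /* set via the enable_irq_wake() path */
    bool masked;
};

static void suspend_irqs(struct sim_irq *irqs, int count)
{
    for (int i = 0; i < count; i++)
        if (!irqs[i].wake_enabled)
            irqs[i].masked = true;   /* disable_irq() equivalent */
}

static void resume_irqs(struct sim_irq *irqs, int count)
{
    for (int i = 0; i < count; i++)
        irqs[i].masked = false;      /* enable_irq() equivalent */
}
```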
move context save & restore for the GPIO and interrupt controllers out
of power-context-t2.c and into their respective drivers; no distinction
is made between LP0 and LP1 context currently; if there is enough of a
performance difference to warrant reintroducing it, this can be done at
a later time.
delete now-deprecated NVIDIA GPIO IRQ code
bug 656008
Change-Id: I68f98f2442c50a93a7ad9cdfef87b630e8c132a9
Reviewed-on: http://git-master/r/931
Tested-by: Gary King <gking@nvidia.com>
Reviewed-by: Venkata (Muni) Anda <vanda@nvidia.com>
Reviewed-by: Gary King <gking@nvidia.com>
|
|
pre-existing NvULowestBitSet must be exported for upcoming channel-in-kernel change
NvOsGetProcessInfo is useful for debugging. Implemented and exported.
Change-Id: I9265626bc4496589a71c6b3517af44d8571a2c2e
Reviewed-on: http://git-master/r/828
Reviewed-by: Acorn Pooley <apooley@nvidia.com>
Tested-by: Acorn Pooley <apooley@nvidia.com>
Reviewed-by: Gary King <gking@nvidia.com>
|
|
previously, the task of managing RM-managed memory handles was split
between nvos (OS page allocation), the RM (heap management for
carveout & IRAM heaps, and handle life-time management), nvreftrack
(abnormal process termination) and nvmap (user-space read/write/map
of memory handles). this resulted in an opaque system that was wasteful
of kernel virtual address space, didn't support CPU cache attributes for
kernel mappings and couldn't fully unwind leaked handles (e.g., if the
application leaked a pinned handle the memory might never be reclaimed).
nvmap is now a full re-implementation of the RM memory manager, unifying all
of the functionality from nvreftrack, nvos, nvmap and nvrm into one
driver used by both user and kernel-space clients.
add configs to control paranoid operation. when paranoid is enabled,
every handle reference passed into the kernel is verified to actually
have been created by nvmap; furthermore, handles which are not global
(the GET_ID ioctl has not been called for them) will fail validation
if they are referenced by any process other than the one which created
them, or a super-user process (opened via /dev/knvmap).
each file descriptor maintains its own table of nvmap_handle_ref
references, so the handle value returned to each process is unique;
furthermore, nvmap_handle_ref objects track how many times they have
been pinned, to ensure that processes which abnormally terminate with
pinned handles can be unwound correctly.
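the per-reference pin counting can be sketched as follows (names are hypothetical, modeled on the nvmap_handle_ref idea above, not the actual driver code):

```c
/* Each reference counts its own pins, so a client that terminates
 * abnormally with pins outstanding can be fully unwound. */
struct sim_ref {
    int pin_count;
};

static void ref_pin(struct sim_ref *r)
{
    r->pin_count++;
}

static void ref_unpin(struct sim_ref *r)
{
    if (r->pin_count > 0)
        r->pin_count--;
}

/* On abnormal process termination, drop every outstanding pin and
 * report how many had to be cleaned up. */
static int ref_unwind(struct sim_ref *r)
{
    int dropped = r->pin_count;
    r->pin_count = 0;
    return dropped;
}
```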
as a compile-time option, fully-unpinned handles which require IOVMM
mappings may be stored in a segmented (by size) MRU (most-recently
unpinned) eviction cache; if IOVMM space is over-committed across
multiple processes, a pin operation may reclaim any or all of the IOVMM
areas in the MRU cache. MRU is used as the eviction policy since
graphics operations frequently operate cyclically, and the least-recently
used entry may be needed almost immediately if the higher-level client
starts (e.g.) rendering the next frame.
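the eviction order can be illustrated with a toy cache (fixed capacity and all names are inventions for the example): entries are recorded when unpinned, and the *most* recently unpinned one is reclaimed first, keeping the oldest entries resident for the next cycle:

```c
/* Toy MRU (most-recently-unpinned) eviction cache. */
#define MRU_CAP 8

struct mru_cache {
    int entries[MRU_CAP];   /* stand-ins for unpinned IOVMM areas */
    int top;
};

static void mru_unpin(struct mru_cache *c, int area)
{
    if (c->top < MRU_CAP)
        c->entries[c->top++] = area;
}

/* Reclaim the most recently unpinned area; returns -1 when empty. */
static int mru_reclaim(struct mru_cache *c)
{
    return c->top > 0 ? c->entries[--c->top] : -1;
}
```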
introduce a concept of "secure" handles. secure handles may only
be mapped into IOVMM space, and when unpinned their mapping in IOVMM
space will be zapped immediately, to prevent malicious processes from
being able to access the handle.
expose carveout heap attributes for each carveout heap in sysfs,
under the nvmap device with sub-device name heap-<heap name>
* total size
* free size
* total block count
* free block count
* largest block
* largest free block
* base address
* name
* heap usage bitmask
carveout heaps may be split at run-time, if sufficient memory is available
in the heap. the split heap can (and should) be assigned a different name
and usage bitmask than the original heap. this allows a large initial
carveout to be split into smaller carveouts, to reserve sections of carveout
memory for specific usages (e.g., camera and/or video clients).
add a split entry in the sysfs tree for each carveout heap, to support
run-time splitting of carveout heaps into reserved regions. format is:
<size>,<usage>,<name>
* size should be parsable with memparse (suffixes k/K and m/M are legal)
* usage is the new heap's usage bitmask
* name is the name of the new heap (must be unique)
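parsing the split spec can be sketched in userspace, with a memparse-like handling of the k/K and m/M suffixes (function names here are illustrative, not the driver's):

```c
#include <stdlib.h>
#include <string.h>

/* memparse-like size parser: accepts k/K and m/M suffixes. */
static unsigned long parse_size(const char *s, const char **end)
{
    char *e;
    unsigned long v = strtoul(s, &e, 0);
    if (*e == 'k' || *e == 'K') { v <<= 10; e++; }
    else if (*e == 'm' || *e == 'M') { v <<= 20; e++; }
    *end = e;
    return v;
}

/* Parse "<size>,<usage>,<name>"; returns 0 on success, -1 on a
 * malformed spec.  name must hold at least name_len bytes. */
static int parse_split(const char *spec, unsigned long *size,
                       unsigned long *usage, char *name, size_t name_len)
{
    const char *p;
    char *e;

    *size = parse_size(spec, &p);
    if (*p++ != ',')
        return -1;
    *usage = strtoul(p, &e, 0);
    if (*e++ != ',' || *e == '\0')
        return -1;
    strncpy(name, e, name_len - 1);
    name[name_len - 1] = '\0';
    return 0;
}
```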
carveout heaps are managed using a first-fit allocator with an explicit
free list, all blocks are kept in a dynamically-sized array (doubles
in size every time all blocks are exhausted); to reduce fragmentation
caused by allocations with different alignment requirements, the
allocator will compare left-justifying and right-justifying the
allocation within the first-fit block, and choose the justification
that results in the largest remaining free block (this is particularly
important for 1M-aligned split heaps).
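the justification choice can be sketched as follows (invented names, a simplified model of the policy described above): place the allocation at the lowest aligned address (left) and at the highest aligned address (right) within the free block, and keep whichever placement leaves the largest single remaining fragment:

```c
#include <stdint.h>

static uint32_t align_up(uint32_t x, uint32_t a)
{
    return (x + a - 1) & ~(a - 1);
}

static uint32_t align_down(uint32_t x, uint32_t a)
{
    return x & ~(a - 1);
}

/* Largest of the two fragments left around a placement at start. */
static uint32_t max_frag(uint32_t base, uint32_t end,
                         uint32_t start, uint32_t len)
{
    uint32_t before = start - base;
    uint32_t after  = end - (start + len);
    return before > after ? before : after;
}

/* Choose a start address inside the free block [base, base+size);
 * returns UINT32_MAX if the request cannot fit. */
static uint32_t place(uint32_t base, uint32_t size,
                      uint32_t len, uint32_t align)
{
    uint32_t end   = base + size;
    uint32_t left  = align_up(base, align);
    uint32_t right = align_down(end - len, align);

    if (left + len > end || right < base)
        return UINT32_MAX;
    return max_frag(base, end, left, len) >= max_frag(base, end, right, len)
           ? left : right;
}
```

with, say, a block at 0x1000 of 0x9000 bytes and a 0x2000-byte request aligned to 0x4000, left-justifying leaves at most a 0x4000 fragment while right-justifying keeps a contiguous 0x7000 fragment, so the right placement wins.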
other code which duplicated functionality subsumed by this changelist
(RM memory manager, NvOs carveout command line parser, etc.) is deleted;
implementations of the RM memory manager on top of nvmap are provided
to support backwards compatibility
bug 634812
Change-Id: Ic89d83fed31b4cadc68653d0e825c368b9c92f81
Reviewed-on: http://git-master/r/590
Reviewed-by: Gary King <gking@nvidia.com>
Tested-by: Gary King <gking@nvidia.com>
|
|
with the change to allocate messages < MAX_SIZE on the stack
inside the dispatcher, there were actually 2 on-stack copies
of the message being maintained: one in the dispatcher, and
one in the RM transport code itself.
the transport code copied the message in order to prepend
a 3-word header; however, since the entire message was then
copied into a memory handle to send it to the AVP, the
internal copy is unnecessary; the message header can be
written directly to the memory handle and then the buffer
provided by the dispatcher can be copied directly into
the memory handle
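the copy elimination can be sketched like this (illustrative names; a userspace stand-in for writing into the shared memory handle): write the 3-word header into the handle first, then copy the dispatcher's buffer directly after it, with no intermediate buffer:

```c
#include <stdint.h>
#include <string.h>

#define HDR_WORDS 3

/* Write header and payload straight into the handle's mapping and
 * return the total number of bytes written. */
static size_t send_to_handle(uint32_t *handle_mem,
                             const uint32_t hdr[HDR_WORDS],
                             const void *msg, size_t msg_len)
{
    /* header goes in place, no temporary copy of the message */
    memcpy(handle_mem, hdr, HDR_WORDS * sizeof(uint32_t));
    memcpy((uint8_t *)(handle_mem + HDR_WORDS), msg, msg_len);
    return HDR_WORDS * sizeof(uint32_t) + msg_len;
}
```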
Change-Id: I3a748192d7e45445bc821456e170670cc3fb0e98
|
|
|
|
Added spinlock primitives to NvOs.
Converted PMC scratch registers access mutexes to spin-locks.
Change-Id: I862baf4d3a7a8fcdf1e8552356805afd4ac897c3
|
|
Change-Id: Iba87dc237961781bd301a782e0160d4addea0ab6
|
|
Change-Id: I8225d867ee48c
|
|
dsb() is inadequate to maintain coherence with DMA devices, since it
only guarantees that writes have been flushed from the CPU's store
buffers; store buffers in a non-DMA-coherent outer cache will not
be flushed.
Change-Id: Ia6082beb5d39c8bef7450e674a7077c5159268a3
|
|
the initial cache writeback for highmem pages in the nvos page allocator
had accidentally shadowed the variable used to store the kernel address,
so the page was never unmapped.
this caused a quick exhaustion of the kmap area during Android bootup.
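the bug pattern looks roughly like this (the functions are stand-ins, not the actual nvos code): the inner declaration hides the outer variable, so the address recorded for the later unmap is never set:

```c
#include <stddef.h>

static int maps_outstanding;

static void *fake_kmap(void)
{
    maps_outstanding++;
    return (void *)0x1000;
}

static void fake_kunmap(void *p)
{
    if (p)
        maps_outstanding--;
}

static int leaky_flush(void)
{
    void *kaddr = NULL;              /* meant to receive the mapping */
    {
        void *kaddr = fake_kmap();   /* BUG: shadows the outer kaddr */
        (void)kaddr;                 /* cache writeback would happen here */
    }
    fake_kunmap(kaddr);              /* outer kaddr is still NULL */
    return maps_outstanding;         /* the mapping leaked */
}
```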
Change-Id: I6e27b7b7f75162652f32296784b53cbdbdc502c4
|
|
nvmap-allocated memory is used primarily by DMA devices, and the cost
of L2 maintenance generally greatly outweighs the benefit of caching
the (mostly streaming) accesses.
a reserved region of the kernel's virtual address space (NVMAP_BASE to
NVMAP_BASE+NVMAP_SIZE) is used by nvmap as a temporary mapping area:
all operations (cache maintenance, read, write) on memory handles are
performed by mapping each page into the nvmap aperture with the same
cache attributes as other active mappings.
this change greatly improves the performance of drawing the drawer
and web pages in Android, since the primary bottleneck in both cases
has been the L2 cache maintenance operations (which no longer exist).
additionally, when cache writebacks are requested on large regions
(currently defined as >= 3 pages), the entire L1 data cache is flushed,
to avoid the loop costs of per-line operations.
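the threshold decision reduces to a size check (page size and names here are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

#define SIM_PAGE_SIZE        4096
#define FULL_FLUSH_THRESHOLD (3 * SIM_PAGE_SIZE)

/* Below three pages, clean line by line; at or above the threshold a
 * full L1 data-cache flush is cheaper than looping over every line. */
static bool use_full_dcache_flush(size_t writeback_bytes)
{
    return writeback_bytes >= FULL_FLUSH_THRESHOLD;
}
```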
Change-Id: I37e07c86eb316811f63e7200d52667debf4b7aa7
|
|
rewrite page-allocation routines to integrate with highmem correctly,
and to allow for inner-cached, outer non-cached mappings of the regions
change the allocation pool from GFP_ATOMIC to kernel & highmem pages,
and disable the debug printout when page allocation fails
bug 641308
Change-Id: I82100c980bc0b2aa390aa6b1fb93d337f98f134a
|
|
if a kernel mutex lock or semaphore wait is interrupted because the task
is being frozen, the correct action to take is to call try_to_freeze,
similar to wait_event_freezable
-ERESTARTSYS is not returned to higher-level software in the event of a
resume; the assumption is that the resume process will ensure that any
freezable task is resumed in a state where the NvOsSemaphore will be
signalable, and that the Wait operation should only return once the
semaphore is actually signalled.
bug 645292
Change-Id: Ia9c6c2426d889851437d93506ce33cb23cee5e8c
|
|
The new NvBootArg_Warmboot argument allows the bootloader to pass
a memory region containing the warm bootloader to the kernel.
Change-Id: Ib8c2a45c64ae9e61240152e91c5cbcff421ada21
|
|
split the nvos interrupt implementation into 2 parts: one which returns
an error code when the thread is awakened due to a pending signal, and
one which loops in the kernel until the semaphore operation completes.
the user-land stub needs to immediately return control to the application
in the event of a pending signal, to ensure that the signal is dealt
with before re-attempting to wait on the semaphore.
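the two flavours can be sketched against a toy wait primitive that can fail with EINTR (standing in for a pending signal; all names are illustrative): the interruptible variant surfaces the error to its caller, the in-kernel variant loops until the wait actually succeeds:

```c
#include <errno.h>

typedef int (*wait_fn)(void *ctx);   /* returns 0, or -EINTR */

/* Userspace path: a pending signal must reach the caller at once. */
static int wait_interruptible(wait_fn wait, void *ctx)
{
    return wait(ctx);
}

/* Kernel path: retry until the semaphore is actually signalled. */
static int wait_uninterruptible(wait_fn wait, void *ctx)
{
    int err;
    do {
        err = wait(ctx);
    } while (err == -EINTR);
    return err;
}

/* toy wait: fails with -EINTR the first `pending` times */
struct toy_ctx { int pending; };

static int toy_wait(void *ctx)
{
    struct toy_ctx *c = ctx;
    return c->pending-- > 0 ? -EINTR : 0;
}
```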
bug: 642544
Change-Id: I0b233c9483d67cc6d25285d0dc3eddafa8502500
|
|
the bootloader may place the carveout in a location that is not exactly at
the top of physical memory (if, for example, the bootloader does not
support mapping all the way to the top of memory), and the existing code
in the RM initialization did not allow for this (or for multiple carveout
apertures).
add a new command-line parameter (nvmem=) to position the carveout aperture
arbitrarily in the physical address space.
Change-Id: Ib750c4c038916a21b9fece490efbe6c953da09de
|
|
allow passing of shmoo data as a preserved memory handle in carveout
reinitialize scaled clocks when resuming from LP0
add first reference indicator to module clock state
Change-Id: Ifea8901b6924b9992885bd10fa07991fe55a06de
|
|
sema_init wasn't called until after the first call to wake_up_process,
resulting in a race condition between the kernel thread down'ing the
semaphore and the main thread initializing it.
move sema_init to before kthread_create to eliminate this race.
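the fixed ordering can be sketched in POSIX terms (illustrative names; pthreads and a POSIX semaphore stand in for the kernel primitives): the semaphore is initialized before the thread that downs it is created, so the new thread can never race with the initialization:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

struct worker {
    sem_t     sem;
    pthread_t thread;
};

static void *worker_fn(void *arg)
{
    struct worker *w = arg;
    sem_wait(&w->sem);   /* safe: sem is already initialized */
    return NULL;
}

static int worker_start(struct worker *w)
{
    sem_init(&w->sem, 0, 0);   /* fixed: init BEFORE creating the thread */
    return pthread_create(&w->thread, NULL, worker_fn, w);
}
```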
|
|
PhysicalMemMap was returning the virtual address of the start of the aperture,
not the virtual address of the requested physical address.
|
|
export the full set of NvOs and NvRefTrack symbols to kernel modules
|
|
NvOsPhysicalMemMap and NvOsPhysicalMemUnmap were using NV_APERTURES and
needed to be updated to tegra_apertures(x)
Change-Id: I1a0f9e380c4d78edcdca5382a597ec62f0ad7f05
|
|
brings over the NvOs kernel implementation and user-land stub from perforce
|