Instead of mapping all physical memory 1:1 into core/kernel's address space,
this commit limits the 1:1 mapping to the binary image and the I/O memory
regions used by the kernel. All subsequent memory accesses of core are done
by mapping the corresponding memory on demand, not necessarily 1:1.
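
To illustrate the idea, a minimal sketch of an on-demand core-local mapping;
the 'map_local' helper, the virtual-range allocator, and all other names are
assumptions for illustration, not the actual code:

    /* sketch only: establish a core-local mapping on demand */
    void *map_core_local(addr_t phys, size_t size)
    {
        /* pick any free virtual range instead of enforcing virt == phys */
        void *virt = 0;
        if (!core_virt_alloc()->alloc(size, &virt))
            return 0;

        /* enter the translation into core's own translation table */
        map_local(phys, (addr_t)virt, size >> PAGE_SIZE_LOG2);
        return virt;
    }
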
This commit has several side effects:
The page table code had to be revisited completely. The kernel no longer
inserts anything into the page tables, apart from the initial translations
needed to have the core/kernel image available when enabling the MMU. The
page tables and higher-level translation tables are no longer named Tlb but
Translation_table. No indirection class is required anymore to define the
translation tables of a concrete SoC; the appropriate ARM specifier is
sufficient.
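
For illustration, the renamed interface might look roughly like this; apart
from the 'Translation_table' name itself, the method names and the
'Page_flags' type are assumptions:

    /* sketch: one translation-table type per ARM specifier, no Tlb class */
    class Translation_table
    {
        public:

            /* used by core only, the kernel no longer inserts anything */
            void insert_translation(addr_t virt, addr_t phys, size_t size,
                                    Page_flags const &flags);

            void remove_translation(addr_t virt, size_t size);
    };
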
The ability to map core's memory the same way as for all other protection
domains makes the special treatment of core's threads (no context area)
obsolete.
Ref #567 (partly solves it)
Fix #723
Fix #1068
Removes the generic processor broadcast function call. Until now, that call
was used for cross-processor TLB maintenance operations only. When core/kernel
gets its memory mapped on demand, and unmapped again, the previous
cross-processor flush routine doesn't work anymore because of a
chicken-and-egg problem: the previous cross-processor broadcast was realized
using a thread, constructed by core, running on top of each processor core.
When constructing threads in core, a dataspace for the thread context is
constructed. Each constructed RAM dataspace gets attached, zeroed out, and
detached again. The detach routine, however, requires a TLB flush operation
executed on each processor core.
Instead of executing a thread on each processor core, a thread waiting for
a global TLB flush is now removed from the scheduler queue and attached to
a TLB flush queue of each processor. The processor-local queue gets checked
whenever the kernel is entered. The last processor that executed the TLB
flush re-attaches the blocked thread to its scheduler queue.
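
A rough sketch of the new mechanism; all identifiers here are made up for
illustration:

    /* executed on every kernel entry, per processor */
    void Processor::process_tlb_flush_queue()
    {
        while (Flush_request *r = _flush_queue.dequeue()) {

            /* perform the flush locally on this processor */
            _flush_tlb(r->addr, r->size);

            /* the last processor re-activates the blocked thread
             * (the counter is decremented atomically in reality) */
            if (--r->pending_cnt == 0)
                _scheduler.insert(r->initiator);
        }
    }
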
To ease the implementation of the mechanism described above, a platform
thread is now directly associated with a platform PD object instead of
being associated with the kernel PD's id only.
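
Sketched, the association changes like this (member names are illustrative):

    /* before: only the kernel PD id was known */
    class Platform_thread { unsigned _pd_id; /* ... */ };

    /* after: direct reference to the platform PD object */
    class Platform_thread { Platform_pd *_pd; /* ... */ };
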
Ref #723
The new core-internal 'Address_space' interface enables core's RM service
to flush mappings of a PD in which a given 'Rm_client' thread resides.
Prior to this patch, each platform invented its own way to flush mappings
in the respective 'rm_session_support.cc' implementation. However, those
implementations used to deal poorly with some corner cases. In particular,
if a PD session was destroyed prior to an RM session, the RM session would
try to use a no-longer existing PD session. The new 'Address_space' uses
the just-added weak-pointer mechanism to deal with this issue.
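
A hedged sketch of how the interface and the weak-pointer protection may fit
together (the exact signature and base-class use are assumptions):

    /* core-internal interface, implemented by the platform PD */
    struct Address_space : Genode::Weak_object<Address_space>
    {
        /* flush the given virtual address range of the PD */
        virtual void flush(addr_t virt_base, size_t size) = 0;
    };
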
Furthermore, the generic 'Rm_session_component::detach' function has been
improved to avoid duplicated unmap operations on platforms that implement
the 'Address_space' interface. Hence, this patch is related to issue #595.
Right now, this is OKL4 only, but other platforms will follow.
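
The improved detach path could be imagined along these lines (sketch only,
assuming a 'Locked_ptr' accessor on the weak pointer and an 'address_space'
getter on the RM client):

    /* one range flush instead of per-page, per-platform unmap code */
    Locked_ptr<Address_space> as(rm_client->address_space());
    if (as.is_valid())
        as->flush(virt_base, size);
    /* if the PD session is already gone, there is nothing to unmap */
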
This patch reports allocation errors of 'alloc_aligned' to the caller in a
more specific way. In particular, out-of-metadata and out-of-memory are
distinguished as different conditions.
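
For example, the distinction could surface to the caller like this (the enum
values are assumptions, not the actual names):

    /* sketch: separate error conditions instead of a plain bool/null */
    enum class Alloc_return { OK, OUT_OF_MEMORY, OUT_OF_METADATA };

    Alloc_return alloc_aligned(size_t size, void **out_addr, int align);

A caller can then respond specifically, e.g., by upgrading the allocator's
metadata quota on OUT_OF_METADATA instead of treating every failure as
exhausted RAM.
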
Related to issue #526.