Commit Graph

15 Commits

Author SHA1 Message Date
Norman Feske 76db3b9c06 base: retire 'Native_config'
This commit moves the parameters of the stack area to the base-internal
header 'stack_area.h'.
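
A minimal sketch of what such a base-internal header could define, with
all names and values being illustrative rather than Genode's actual
definitions:

  /* base-internal/stack_area.h (illustrative sketch, not the real header) */
  namespace Genode {

    /* virtual address range reserved for the stack area (values made up) */
    constexpr unsigned long stack_area_virtual_base() { return 0x40000000UL; }
    constexpr unsigned long stack_area_virtual_size() { return 0x10000000UL; }

    /* virtual size reserved for each individual stack */
    constexpr unsigned long stack_virtual_size()      { return 0x00100000UL; }
  }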

Issue #1832
2016-03-08 17:00:54 +01:00
Norman Feske 7f73e5e879 base: hide internals of the Thread API
This patch moves the details of stack allocation and organization to
the base-internal headers. Thereby, I replaced the notion of "thread
contexts" by "stacks" as this term is much more intuitive. The fact that
we place thread-specific information at the bottom of the stack is not
worth introducing new terminology.
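
The following sketch merely illustrates the idea of keeping the
thread-specific information within the memory reserved for a stack; the
structure and names are made up for illustration and do not reflect
core's actual stack layout:

  /* illustrative sketch of one stack slot with adjoining thread meta data */
  struct Thread_meta_data { void *thread; unsigned long utcb_addr; };

  struct Stack_slot
  {
    static constexpr unsigned long SIZE = 64*1024;

    /* stack proper, grows downwards towards lower addresses */
    unsigned char stack[SIZE - sizeof(Thread_meta_data)];

    /* thread-specific information placed at the upper end of the slot */
    Thread_meta_data meta { };

    /* initial stack pointer starts just below the meta data, 16-byte aligned */
    unsigned long initial_sp() const {
      return reinterpret_cast<unsigned long>(&meta) & ~0xfUL; }
  };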

Issue #1832
2016-03-07 12:34:46 +01:00
Norman Feske 9e6f3be806 sel4: update to version 2.1
This patch updates seL4 from the experimental branch of one year ago to
the master branch of version 2.1. The transition has the following
implications.

In contrast to the experimental branch, the master branch has no way to
manually define the allocation of kernel objects within untyped memory
ranges. Instead, the kernel maintains a built-in allocation policy. This
policy rules out the deallocation of once-used parts of untyped memory.
The only way to reuse memory is to revoke the entire untyped memory
range. Consequently, we cannot share a large untyped memory range for
kernel objects of different protection domains. In order to reuse memory
at a reasonably fine granularity, we need to split the initial untyped
memory ranges into small chunks that can be individually revoked. Those
chunks are called "untyped pages". An untyped page is a 4 KiB untyped
memory region.
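
A sketch of how an initial untyped range can be split into such untyped
pages using the standard seL4 system-call bindings (the capability
selectors passed in are placeholders):

  #include <sel4/sel4.h>

  /* carve 'num_pages' 4-KiB untyped pages out of one initial untyped range */
  void split_into_untyped_pages(seL4_Untyped initial_untyped,
                                seL4_CNode   dst_cnode,
                                seL4_Word    first_slot,
                                seL4_Word    num_pages)
  {
    for (seL4_Word i = 0; i < num_pages; i++) {
      int const ret = seL4_Untyped_Retype(initial_untyped,
                                          seL4_UntypedObject, /* create untypeds */
                                          12,                 /* 2^12 = 4 KiB    */
                                          dst_cnode, 0, 0,    /* destination     */
                                          first_slot + i,     /* target slot     */
                                          1);                 /* one per call    */
      if (ret != seL4_NoError)
        break;
    }
  }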

The bootstrapping of core has to employ a two-stage allocation approach
now. For creating the initial kernel objects for core, which remain
static during the entire lifetime of the system, kernel objects are
created directly out of the initial untyped memory regions as reported
by the kernel. The so-called "initial untyped pool" keeps track of the
consumption of those untyped memory ranges by mimicking the kernel's
internal allocation policy. Kernel objects created this way can be of
any size. For example, the phys CNode, which is used to store page-frame
capabilities, is 16 MiB in size. Also, core's CSpace uses a relatively
large CNode.
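
The policy that such a pool has to mirror is sketched below: each new
kernel object is carved out at a watermark that is aligned up to the
object's own size (a simplified assumption drawn from the description
above, not a verbatim copy of core's code; the overflow check against
the range size is omitted):

  /* book-keeping for one initial untyped memory range */
  struct Untyped_range
  {
    unsigned long phys_base;        /* start of the untyped range           */
    unsigned      size_log2;        /* range size, for the omitted check    */
    unsigned long free_offset = 0;  /* watermark of consumed memory         */

    /* account for the creation of one kernel object of 2^obj_size_log2 bytes */
    unsigned long alloc(unsigned obj_size_log2)
    {
      unsigned long const align_mask = (1UL << obj_size_log2) - 1;
      unsigned long const aligned    = (free_offset + align_mask) & ~align_mask;

      free_offset = aligned + (1UL << obj_size_log2);
      return phys_base + aligned;   /* physical address of the new object */
    }
  };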

After the initial setup phase, all remaining untyped memory is turned
into untyped pages. From this point on, newly created kernel objects
cannot exceed 4 KiB in size because one kernel object cannot span
multiple untyped memory regions. The capability selectors for untyped
pages are organized similarly to those of page-frame capabilities. There
is a new 2nd-level CNode (UNTYPED_CORE_CNODE) that is dimensioned
according to the maximum amount of physical memory (1M entries, each
entry representing 4 KiB). The CNode is organized such that an index
into the CNode directly corresponds to the physical frame number of the
underlying memory. This way, we can easily determine an untyped page
selector for any physical address, i.e., for revoking the kernel
objects allocated at a specific physical page. The downside is the need
for another 16 MiB chunk of meta data. Also, we need to keep in mind
that this approach won't scale to 64-bit systems. We will eventually
need to replace the PHYS_CORE_CNODE and UNTYPED_CORE_CNODE by CNode
hierarchies to model a sparsely populated CNode.
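
The direct correspondence between physical addresses and selector
indices boils down to a simple shift (helper name illustrative):

  enum { PAGE_SIZE_LOG2 = 12 };

  /* index into UNTYPED_CORE_CNODE for the untyped page at 'phys_addr' */
  unsigned long untyped_page_index(unsigned long phys_addr)
  {
    return phys_addr >> PAGE_SIZE_LOG2;  /* physical frame number */
  }

  /* with 1M entries of 4 KiB each, the CNode covers up to 4 GiB of memory */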

The size constraint on kernel objects has the immediate implication that
the VM CSpaces of protection domains must be organized via several
levels of CNodes. I.e., as the top-level CNode of core has a size of
2^12, the remaining 20 PD-specific CSpace address bits are organized as
a 2nd-level 2^4 padding CNode, a 3rd-level 2^8 CNode, and several
4th-level 2^8 leaf CNodes. The latter contain the actual selectors for
the page tables and page-table entries of the respective PD.
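
The resulting decomposition of a 32-bit VM CSpace address, with the bit
widths taken from the description above (helper names illustrative):

  struct Cspace_path
  {
    unsigned top;   /* 12 bits: core's top-level CNode  */
    unsigned pad;   /*  4 bits: 2nd-level padding CNode */
    unsigned mid;   /*  8 bits: 3rd-level CNode         */
    unsigned leaf;  /*  8 bits: 4th-level leaf CNode    */
  };

  Cspace_path decompose(unsigned long sel)
  {
    return Cspace_path { unsigned((sel >> 20) & 0xfff),
                         unsigned((sel >> 16) & 0xf),
                         unsigned((sel >>  8) & 0xff),
                         unsigned( sel        & 0xff) };
  }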

As another slight difference from the experimental branch, the master
branch requires the explicit assignment of page directories to an ASID
pool.
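
A sketch of that assignment, assuming the x86 system-call bindings (the
exact function name differs between seL4 versions, and the selectors
below are placeholders):

  #include <sel4/sel4.h>

  /* a page directory must join an ASID pool before it can serve as VSpace */
  void assign_to_asid_pool(seL4_CPtr asid_pool, seL4_CPtr page_directory)
  {
    seL4_X86_ASIDPool_Assign(asid_pool, page_directory);
  }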

Besides the adjustment to the new seL4 version, the patch introduces a
dedicated type for capability selectors. Previously, we simply
represented them as unsigned integer values, which became increasingly
confusing. The new type 'Cap_sel' is a PD-local capability selector. The
type 'Cnode_index' is an index into a CNode (which is generally not
the entire CSpace of the PD).
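
A simplified sketch of the distinction (not the actual definitions in
core; the explicit constructors prevent the accidental mix-up of both
notions with plain integers):

  struct Cap_sel      /* PD-local capability selector (CSpace address) */
  {
    unsigned long value;
    explicit Cap_sel(unsigned long value) : value(value) { }
  };

  struct Cnode_index  /* index within one particular CNode */
  {
    unsigned long value;
    explicit Cnode_index(unsigned long value) : value(value) { }
  };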

Fixes #1887
2016-02-26 11:36:55 +01:00
Stefan Kalkowski ccb968ff7d safeguard the synchronized allocator template
* Move the Synced_interface from os -> base (see the sketch below)
* Align the naming of "synchronized" helpers to "Synced_*"
* Move Synced_range_allocator to core's private headers
* Remove the raw() and lock() members from Synced_allocator and
  Synced_range_allocator, and re-use the Synced_interface for them
* Make core's Mapped_mem_allocator a friend class of Synced_range_allocator
  to enable the needed "unsafe" access of its physical and virtual allocators
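
A sketch in the spirit of the Synced_interface, simplified and based on
std::mutex rather than the actual Genode primitives: every access to the
wrapped object goes through a guard that holds the lock for the duration
of the call:

  #include <mutex>

  template <typename IF>
  class Synced_interface
  {
    private:

      std::mutex &_mutex;
      IF         &_if;

    public:

      struct Guard
      {
        std::unique_lock<std::mutex> lock;
        IF &interface;

        IF *operator -> () { return &interface; }
      };

      Synced_interface(std::mutex &mutex, IF &interface)
      : _mutex(mutex), _if(interface) { }

      /* calling the interface as 'synced()->method()' serializes the access */
      Guard operator () () {
        return Guard { std::unique_lock<std::mutex>(_mutex), _if }; }
  };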

Fix #1697
2015-09-30 12:20:39 +02:00
Adrian-Ken Rueegsegger c2ff0ae9d4 Minor cleanup fixes
- Fix spelling errors
- Remove extra semicolons
- Remove extra spaces

Fixes #1650
2015-08-21 11:00:59 +02:00
Norman Feske acd7a2f1c4 sel4: reserve virt page for main-thread IPC buffer 2015-05-26 09:40:00 +02:00
Norman Feske 5a05521e0f sel4: bootstrap of init and page-fault handling 2015-05-26 09:40:00 +02:00
Norman Feske f19f454ae5 sel4: move core to a library, add boot_modules.s 2015-05-26 09:39:59 +02:00
Norman Feske 6ffba0e473 sel4: IPC implementation 2015-05-26 09:39:59 +02:00
Norman Feske ff46d02c48 sel4: capability lifetime management 2015-05-26 09:39:59 +02:00
Norman Feske f24b212e47 sel4: core-local thread creation 2015-05-26 09:39:58 +02:00
Norman Feske e6ad346e24 sel4: management of core's virtual memory 2015-05-26 09:39:57 +02:00
Norman Feske 1f5cfef64e sel4: switch to core's custom cspace layout 2015-05-26 09:39:57 +02:00
Norman Feske de8bfb37f9 sel4: initialization of core's allocators 2015-05-26 09:39:57 +02:00
Norman Feske 633f335171 sel4: core skeleton 2015-05-26 09:39:57 +02:00