/*
 * \brief  Platform interface implementation
 * \author Norman Feske
 * \date   2015-05-01
 */

/*
 * Copyright (C) 2015-2017 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU Affero General Public License version 3.
 */

/* Genode includes */
#include <base/sleep.h>
#include <base/thread.h>
#include <base/log.h>
#include <trace/source_registry.h>
#include <util/xml_generator.h>

/* core includes */
#include <boot_modules.h>
#include <core_log.h>
#include <platform.h>
#include <map_local.h>
#include <cnode.h>
#include <untyped_memory.h>
#include <thread_sel4.h>

/* base-internal includes */
#include <base/internal/globals.h>
#include <base/internal/stack_area.h>

/* seL4 includes */
#include <sel4/benchmark_utilisation_types.h>

using namespace Genode;

static bool const verbose_boot_info = true;


/**
 * Set while the Platform object is under construction
 *
 * Avoids deadlocks due to nested calls of the Platform() constructor caused
 * by platform_specific(). This may happen if the meta-data allocator of
 * phys_alloc runs out of memory.
 */
static Platform * platform_in_construction = nullptr;


/*
 * Memory-layout information provided by the linker script
 */

/* virtual address range consumed by core's program image */
extern unsigned _prog_img_beg, _prog_img_end;


/****************************************
 ** Support for core memory management **
 ****************************************/

bool Mapped_mem_allocator::_map_local(addr_t virt_addr, addr_t phys_addr,
                                      unsigned size)
{
	if (platform_in_construction)
		Genode::warning("need physical memory, but Platform object not constructed yet");

	size_t const num_pages = size / get_page_size();

	Untyped_memory::convert_to_page_frames(phys_addr, num_pages);

	return map_local(phys_addr, virt_addr, num_pages, platform_in_construction);
}


bool Mapped_mem_allocator::_unmap_local(addr_t virt_addr, addr_t phys_addr,
                                        unsigned size)
{
	if (!unmap_local(virt_addr, size / get_page_size()))
		return false;

	Untyped_memory::convert_to_untyped_frames(phys_addr, size);

	return true;
}


/************************
 ** Platform interface **
 ************************/

void Platform::_init_unused_phys_alloc()
{
	/* the lower physical RAM is kept by the kernel and is not usable by us */
	_unused_phys_alloc.add_range(0x100000, 0UL - 0x100000);
}

void Platform::_init_allocators()
{
	/* interrupt allocator */
	_irq_alloc.add_range(0, 256);

	/*
	 * XXX allocate intermediate CNodes for organizing the untyped pages here
	 */

	/* turn remaining untyped memory ranges into untyped pages */
	_initial_untyped_pool.turn_into_untyped_object(Core_cspace::TOP_CNODE_UNTYPED_4K,
		[&] (addr_t const phys, addr_t const size, bool const device) {

			/* register with the physical-memory or I/O-memory allocator */
			addr_t const phys_addr = trunc_page(phys);
			size_t const phys_size = round_page(phys - phys_addr + size);

			if (device)
				_io_mem_alloc.add_range(phys_addr, phys_size);
			else
				_core_mem_alloc.phys_alloc().add_range(phys_addr, phys_size);

			_unused_phys_alloc.remove_range(phys_addr, phys_size);
		});

	/*
	 * From this point on, we can no longer create kernel objects from the
	 * '_initial_untyped_pool' because the pool is empty.
	 */

	/* core's maximum virtual memory area */
	_unused_virt_alloc.add_range(_vm_base, _vm_size);

	/* remove core image from core's virtual address allocator */
	addr_t const modules_start  = reinterpret_cast<addr_t>(&_boot_modules_binaries_begin);
	addr_t const core_virt_beg  = trunc_page((addr_t)&_prog_img_beg),
	             core_virt_end  = round_page((addr_t)&_prog_img_end);
	addr_t const image_elf_size = core_virt_end - core_virt_beg;

	_unused_virt_alloc.remove_range(core_virt_beg, image_elf_size);
	_core_mem_alloc.virt_alloc().add_range(modules_start, core_virt_end - modules_start);

	/* remove initial IPC buffer from core's virtual address allocator */
	seL4_BootInfo const &bi = sel4_boot_info();
	addr_t const core_ipc_buffer = reinterpret_cast<addr_t>(bi.ipcBuffer);
	addr_t const core_ipc_bsize  = 4096;
	_unused_virt_alloc.remove_range(core_ipc_buffer, core_ipc_bsize);

	/* remove sel4_boot_info page from core's virtual address allocator */
	addr_t const boot_info_page = reinterpret_cast<addr_t>(&bi);
	addr_t const boot_info_size = 4096 + bi.extraLen;
	_unused_virt_alloc.remove_range(boot_info_page, boot_info_size);

	/* preserve stack area in core's virtual address space */
	_unused_virt_alloc.remove_range(stack_area_virtual_base(),
	                                stack_area_virtual_size());

	if (verbose_boot_info) {
		typedef Hex_range<addr_t> Hex_range;
		log("virtual address layout of core:");
		log(" overall    ", Hex_range(_vm_base, _vm_size));
		log(" core image ", Hex_range(core_virt_beg, image_elf_size));
		log(" ipc buffer ", Hex_range(core_ipc_buffer, core_ipc_bsize));
		log(" boot_info  ", Hex_range(boot_info_page, boot_info_size));
		log(" stack area ", Hex_range(stack_area_virtual_base(),
		                              stack_area_virtual_size()));
	}
}

|
|
|
|
|
|
|
|
|
|
|
|
void Platform::_switch_to_core_cspace()
{
	Cnode_base const initial_cspace(Cap_sel(seL4_CapInitThreadCNode),
	                                CONFIG_WORD_SIZE);

	/* copy initial selectors to core's CNode */
	_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapInitThreadTCB));
	_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapInitThreadVSpace));
	_core_cnode.move(initial_cspace, Cnode_index(seL4_CapIRQControl)); /* cannot be copied */
	_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapASIDControl));
	_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapInitThreadASIDPool));

	/* XXX io port not available on ARM, causes just a kernel warning XXX */
	_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapIOPort));
	_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapBootInfoFrame));
	_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapInitThreadIPCBuffer));
	_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapDomain));

	/* replace seL4_CapInitThreadCNode with new top-level CNode */
	_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::top_cnode_sel()),
	                 Cnode_index(seL4_CapInitThreadCNode));

	/* copy untyped memory selectors to core's CNode */
	seL4_BootInfo const &bi = sel4_boot_info();

	/*
	 * We have to move (not copy) the selectors for the initial untyped ranges
	 * because some of them are already populated with kernel objects allocated
	 * via '_initial_untyped_pool'. For such an untyped memory range, the
	 * attempt to copy its selector would result in the following error:
	 *
	 *   <<seL4: Error deriving cap for CNode Copy operation.>>
	 */
	for (unsigned sel = bi.untyped.start; sel < bi.untyped.end; sel++)
		_core_cnode.move(initial_cspace, Cnode_index(sel));

	/* move selectors of core image */
	addr_t const modules_start = reinterpret_cast<addr_t>(&_boot_modules_binaries_begin);
	addr_t const modules_end   = reinterpret_cast<addr_t>(&_boot_modules_binaries_end);
	addr_t virt_addr = (addr_t)(&_prog_img_beg);

	for (unsigned sel = bi.userImageFrames.start;
	     sel < bi.userImageFrames.end;
	     sel++, virt_addr += get_page_size()) {

		/* remove mapping to boot modules, no access required within core */
		if (modules_start <= virt_addr && virt_addr < modules_end) {
			long err = _unmap_page_frame(Cap_sel(sel));
			if (err != seL4_NoError)
				error("unmapping boot modules ", Hex(virt_addr), " error=", err);
		}

		/* insert cap for core image */
		_core_cnode.move(initial_cspace, Cnode_index(sel));
	}

	/* copy statically created CNode selectors to core's CNode */
	_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::top_cnode_sel()));
	_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::core_pad_cnode_sel()));
	_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::core_cnode_sel()));
	_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::phys_cnode_sel()));

	/*
	 * Construct CNode hierarchy of core's CSpace
	 */

	/* insert 3rd-level core CNode into 2nd-level core-pad CNode */
	_core_pad_cnode.copy(initial_cspace, Cnode_index(Core_cspace::core_cnode_sel()),
	                     Cnode_index(0));

	/* insert 2nd-level core-pad CNode into 1st-level CNode */
	_top_cnode.copy(initial_cspace, Cnode_index(Core_cspace::core_pad_cnode_sel()),
	                Cnode_index(Core_cspace::TOP_CNODE_CORE_IDX));

	/* insert 2nd-level phys-mem CNode into 1st-level CNode */
	_top_cnode.copy(initial_cspace, Cnode_index(Core_cspace::phys_cnode_sel()),
Cnode_index(Core_cspace::TOP_CNODE_PHYS_IDX));

	/* insert 2nd-level untyped-pages CNode into 1st-level CNode */
	_top_cnode.copy(initial_cspace, Cnode_index(Core_cspace::untyped_cnode_4k()),
	                Cnode_index(Core_cspace::TOP_CNODE_UNTYPED_4K));

	/* insert 2nd-level untyped-pages CNode into 1st-level CNode */
	_top_cnode.move(initial_cspace, Cnode_index(Core_cspace::untyped_cnode_16k()),
	                Cnode_index(Core_cspace::TOP_CNODE_UNTYPED_16K));

	/* activate core's CSpace */
	{
		seL4_CNode_CapData const null_data = { { 0 } };
		seL4_CNode_CapData const guard = seL4_CNode_CapData_new(0, CONFIG_WORD_SIZE - 32);

		int const ret = seL4_TCB_SetSpace(seL4_CapInitThreadTCB,
		                                  seL4_CapNull, /* fault_ep */
		                                  Core_cspace::top_cnode_sel(),
		                                  guard.words[0],
		                                  seL4_CapInitThreadPD,
		                                  null_data.words[0]);

		if (ret != seL4_NoError)
			error(__FUNCTION__, ": seL4_TCB_SetSpace returned ", ret);
	}
}


Cap_sel Platform::_init_asid_pool()
{
	return Cap_sel(seL4_CapInitThreadASIDPool);
}


void Platform::_init_rom_modules()
{
	seL4_BootInfo const &bi = sel4_boot_info();

	/*
	 * Slab allocator for allocating 'Rom_module' meta data.
	 */
	static long slab_block[4096];
	static Tslab<Rom_module, sizeof(slab_block)>
		rom_module_slab(core_mem_alloc(), &slab_block);

	/*
	 * Allocate an unused range of phys CNode address space in which to make
	 * the boot modules available.
	 */
	void *out_ptr = nullptr;
	size_t const modules_size = (addr_t)&_boot_modules_binaries_end
	                          - (addr_t)&_boot_modules_binaries_begin + 1;

	Range_allocator::Alloc_return const alloc_ret =
		_unused_phys_alloc.alloc_aligned(modules_size, &out_ptr, get_page_size_log2());

	if (alloc_ret.error()) {
		error("could not reserve phys CNode space for boot modules");
		struct Init_rom_modules_failed { };
		throw Init_rom_modules_failed();
	}

	/*
	 * Calculate the frame selector used to back the boot modules
	 */
	addr_t const unused_range_start      = (addr_t)out_ptr;
	addr_t const unused_first_frame_sel  = unused_range_start >> get_page_size_log2();
	addr_t const modules_start           = (addr_t)&_boot_modules_binaries_begin;
	addr_t const modules_core_offset     = modules_start
	                                     - (addr_t)&_prog_img_beg;
	addr_t const modules_first_frame_sel = bi.userImageFrames.start
	                                     + (modules_core_offset >> get_page_size_log2());

	Boot_modules_header const *header = &_boot_modules_headers_begin;
	for (; header < &_boot_modules_headers_end; header++) {

		/* offset relative to first module */
		addr_t const module_offset        = header->base - modules_start;
		addr_t const module_offset_frames = module_offset >> get_page_size_log2();
		size_t const module_size          = round_page(header->size);
		addr_t const module_frame_sel     = modules_first_frame_sel
		                                  + module_offset_frames;
		size_t const module_num_frames    = module_size >> get_page_size_log2();

		/*
		 * Destination frame within phys CNode
		 */
		addr_t const dst_frame = unused_first_frame_sel + module_offset_frames;

		/*
		 * Install the module's frame selectors into phys CNode
		 */
Cnode_base const initial_cspace(Cap_sel(seL4_CapInitThreadCNode), 32);
		for (unsigned i = 0; i < module_num_frames; i++)
			_phys_cnode.move(initial_cspace, Cnode_index(module_frame_sel + i),
Cnode_index(dst_frame + i));

		log("boot module '", (char const *)header->name, "' "
		    "(", header->size, " bytes)");

		/*
		 * Register the ROM module. The base address refers to the location
		 * of the ROM module within the phys CNode address space.
		 */
		Rom_module * rom_module = new (rom_module_slab)
			Rom_module(dst_frame << get_page_size_log2(), header->size,
			           (const char*)header->name);

		_rom_fs.insert(rom_module);
	}

	/* export x86 platform-specific info via the 'platform_info' ROM */
	unsigned   const pages    = 1;
	addr_t     const rom_size = pages << get_page_size_log2();
	void            *virt_ptr = nullptr;
	const char      *rom_name = "platform_info";

base/core: use references instead of pointers
This patch replaces the former prominent use of pointers by references
wherever feasible. This has the following benefits:
* The contract between caller and callee becomes more obvious. When
passing a reference, the contract says that the argument cannot be
a null pointer. The caller is responsible to ensure that. Therefore,
the use of reference eliminates the need to add defensive null-pointer
checks at the callee site, which sometimes merely exist to be on the
safe side. The bottom line is that the code becomes easier to follow.
* Reference members must be initialized via an object initializer,
which promotes a programming style that avoids intermediate object-
construction states. Within core, there are still a few pointers
as member variables left though. E.g., caused by the late association
of 'Platform_thread' objects with their 'Platform_pd' objects.
* If no pointers are present as member variables, we don't need to
manually provide declarations of a private copy constructor and
an assignment operator to avoid -Weffc++ errors "class ... has
pointer data members [-Werror=effc++]".
This patch also changes a few system bindings on NOVA and Fiasco.OC,
e.g., the return value of the global 'cap_map' accessor has become a
reference. Hence, the patch touches a few places outside of core.
Fixes #3135
2019-01-24 22:00:01 +01:00
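The pointer-vs-reference contract described in the note above can be illustrated with a minimal sketch. The `Allocator` type and both functions are hypothetical illustration aids, not Genode's actual API:

```cpp
/* hypothetical stand-in for an allocator-like object */
struct Allocator { int alloc_count = 0; };

/* pointer-taking variant: the callee must defend against nullptr */
int alloc_via_ptr(Allocator *a)
{
	if (!a)
		return -1;  /* defensive check the contract does not rule out */
	return ++a->alloc_count;
}

/* reference-taking variant: the contract excludes null, no check needed */
int alloc_via_ref(Allocator &a)
{
	return ++a.alloc_count;
}
```

With the reference variant, the responsibility for validity moves to the caller, so the callee body stays free of defensive checks.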
	addr_t const phys_addr = Untyped_memory::alloc_page(ram_alloc());
	Untyped_memory::convert_to_page_frames(phys_addr, pages);

if (region_alloc().alloc_aligned(rom_size, &virt_ptr, get_page_size_log2()).error()) {
		error("could not setup platform_info ROM - region allocation error");
Untyped_memory::free_page(ram_alloc(), phys_addr);
		return;
	}

addr_t const virt_addr = reinterpret_cast<addr_t>(virt_ptr);

	if (!map_local(phys_addr, virt_addr, pages, this)) {
		error("could not setup platform_info ROM - map error");
region_alloc().free(virt_ptr);
		Untyped_memory::free_page(ram_alloc(), phys_addr);
		return;
	}

Genode::Xml_generator xml(reinterpret_cast<char *>(virt_addr),
	                          rom_size, rom_name, [&] ()
	{
		if (!bi.extraLen)
			return;

		addr_t const boot_info_page  = reinterpret_cast<addr_t>(&bi);
		addr_t const boot_info_extra = boot_info_page + 4096;

		seL4_BootInfoHeader const * element = reinterpret_cast<seL4_BootInfoHeader *>(boot_info_extra);
		seL4_BootInfoHeader const * const last = reinterpret_cast<seL4_BootInfoHeader const *>(boot_info_extra + bi.extraLen);

		for (seL4_BootInfoHeader const *next = nullptr;
		     (next = reinterpret_cast<seL4_BootInfoHeader const *>(reinterpret_cast<addr_t>(element) + element->len)) &&
		     next <= last && element->id != SEL4_BOOTINFO_HEADER_PADDING;
		     element = next)
		{
			if (element->id == SEL4_BOOTINFO_HEADER_X86_TSC_FREQ) {
				struct tsc_freq {
					uint32_t freq_mhz;
				} __attribute__((packed));

				if (sizeof(tsc_freq) + sizeof(*element) != element->len) {
					error("unexpected tsc frequency format");
					continue;
				}

				tsc_freq const * boot_freq = reinterpret_cast<tsc_freq const *>(reinterpret_cast<addr_t>(element) + sizeof(*element));

				xml.node("kernel", [&] () { xml.attribute("name", "sel4"); });
				xml.node("hardware", [&] () {
					xml.node("features", [&] () {
#ifdef CONFIG_VTX
						xml.attribute("vmx", true);
#else
						xml.attribute("vmx", false);
#endif
					});
					xml.node("tsc", [&] () {
						xml.attribute("freq_khz", boot_freq->freq_mhz * 1000UL);
					});
				});
			}

			if (element->id == SEL4_BOOTINFO_HEADER_X86_FRAMEBUFFER) {
				struct mbi2_framebuffer {
					uint64_t addr;
					uint32_t pitch;
					uint32_t width;
					uint32_t height;
					uint8_t  bpp;
					uint8_t  type;
				} __attribute__((packed));

				if (sizeof(mbi2_framebuffer) + sizeof(*element) != element->len) {
					error("unexpected framebuffer information format");
					continue;
				}

				mbi2_framebuffer const * boot_fb = reinterpret_cast<mbi2_framebuffer const *>(reinterpret_cast<addr_t>(element) + sizeof(*element));

				xml.node("boot", [&] () {
					xml.node("framebuffer", [&] () {
						xml.attribute("phys",   String<32>(Hex(boot_fb->addr)));
						xml.attribute("width",  boot_fb->width);
						xml.attribute("height", boot_fb->height);
						xml.attribute("bpp",    boot_fb->bpp);
						xml.attribute("type",   boot_fb->type);
						xml.attribute("pitch",  boot_fb->pitch);
					});
				});
			}

			if (element->id != SEL4_BOOTINFO_HEADER_X86_ACPI_RSDP)
				continue;

			xml.node("acpi", [&] () {

				struct Acpi_rsdp
				{
					uint64_t signature;
					uint8_t  checksum;
					char     oem[6];
					uint8_t  revision;
					uint32_t rsdt;
					uint32_t length;
					uint64_t xsdt;
					uint32_t reserved;

					bool valid() const {
						const char sign[] = "RSD PTR ";
						return signature == *(Genode::uint64_t *)sign;
					}
				} __attribute__((packed));

				Acpi_rsdp const * rsdp = reinterpret_cast<Acpi_rsdp const *>(reinterpret_cast<addr_t>(element) + sizeof(*element));

				if (rsdp->valid() && (rsdp->rsdt || rsdp->xsdt)) {
					xml.attribute("revision", rsdp->revision);
					if (rsdp->rsdt)
						xml.attribute("rsdt", String<32>(Hex(rsdp->rsdt)));

					if (rsdp->xsdt)
						xml.attribute("xsdt", String<32>(Hex(rsdp->xsdt)));
				}
			});
		}
	});

	if (!unmap_local(virt_addr, pages, this)) {
		error("could not setup platform_info ROM - unmap error");
		return;
	}
region_alloc().free(virt_ptr);

	_rom_fs.insert(
		new (core_mem_alloc()) Rom_module(phys_addr, rom_size, rom_name));
}


Platform::Platform()
:
_io_mem_alloc(&core_mem_alloc()), _io_port_alloc(&core_mem_alloc()),
|
|
|
|
_irq_alloc(&core_mem_alloc()),
|
|
|
|
_unused_phys_alloc(&core_mem_alloc()),
|
|
|
|
_unused_virt_alloc(&core_mem_alloc()),
|
2015-05-11 18:55:30 +02:00
|
|
|
_init_unused_phys_alloc_done((_init_unused_phys_alloc(), true)),
|
    _vm_base(0x2000), /* 2nd page is used as IPC buffer of main thread */
    _vm_size((CONFIG_WORD_SIZE == 32 ? 3 : 8)*1024*1024*1024UL - _vm_base),
    _init_sel4_ipc_buffer_done((init_sel4_ipc_buffer(), true)),
    _switch_to_core_cspace_done((_switch_to_core_cspace(), true)),
    _core_page_table_registry(_core_page_table_registry_alloc),
    _init_core_page_table_registry_done((_init_core_page_table_registry(), true)),
    _init_allocators_done((_init_allocators(), true)),
    _core_vm_space(Cap_sel(seL4_CapInitThreadPD),
                   _core_sel_alloc,
                   _phys_alloc,
                   _top_cnode,
                   _core_cnode,
                   _phys_cnode,
                   Core_cspace::CORE_VM_ID,
                   _core_page_table_registry,
                   "core")
{
    platform_in_construction = this;

    /* start benchmarking for CPU utilization in Genode TRACE service */
    seL4_BenchmarkResetLog();

    /* create notification object for Genode::Lock used by this first thread */
    Cap_sel lock_sel (INITIAL_SEL_LOCK);
    Cap_sel core_sel = _core_sel_alloc.alloc();

    {
        addr_t const phys_addr = Untyped_memory::alloc_page(ram_alloc());
        seL4_Untyped const service = Untyped_memory::untyped_sel(phys_addr).value();
        create<Notification_kobj>(service, core_cnode().sel(), core_sel);
    }

    /* mint a copy of the notification object with badge of lock_sel */
    _core_cnode.mint(_core_cnode, core_sel, lock_sel);

    /* test signal/wakeup once */
    seL4_Word sender;
    seL4_Signal(lock_sel.value());
    seL4_Wait(lock_sel.value(), &sender);

    ASSERT(sender == INITIAL_SEL_LOCK);

    /* back stack area with page tables */
    enum { MAX_CORE_THREADS = 32 };
    _core_vm_space.unsynchronized_alloc_page_tables(stack_area_virtual_base(),
                                                    stack_virtual_size() *
                                                    MAX_CORE_THREADS);

    /* add some minor virtual region for dynamic usage by core */
    addr_t const virt_size = 32 * 1024 * 1024;
    void * virt_ptr = nullptr;
    if (_unused_virt_alloc.alloc_aligned(virt_size, &virt_ptr, get_page_size_log2()).ok()) {

        addr_t const virt_addr = (addr_t)virt_ptr;

        /* add to available virtual region of core */
        _core_mem_alloc.virt_alloc().add_range(virt_addr, virt_size);

        /* back region by page tables */
        _core_vm_space.unsynchronized_alloc_page_tables(virt_addr, virt_size);
    }

    /* add idle thread trace subjects */
    for (unsigned cpu_id = 0; cpu_id < affinity_space().width(); cpu_id++) {
        struct Idle_trace_source : public Trace::Source::Info_accessor,
                                   private Trace::Control,
                                   private Trace::Source,
                                   private Thread_info
        {
            Affinity::Location const affinity;

            /**
             * Trace::Source::Info_accessor interface
             */
            Info trace_source_info() const override
            {
                Genode::Thread &myself = *Genode::Thread::myself();
                addr_t const ipc_buffer = reinterpret_cast<addr_t>(myself.utcb());
                seL4_IPCBuffer * ipcbuffer = reinterpret_cast<seL4_IPCBuffer *>(ipc_buffer);
                uint64_t const * buf = reinterpret_cast<uint64_t *>(ipcbuffer->msg);

                seL4_BenchmarkGetThreadUtilisation(tcb_sel.value());
                uint64_t execution_time = buf[BENCHMARK_IDLE_TCBCPU_UTILISATION];
                uint64_t sc_time = 0; /* not supported */

                return { Session_label("kernel"), Trace::Thread_name("idle"),
                         Trace::Execution_time(execution_time, sc_time), affinity };
            }

            Idle_trace_source(Trace::Source_registry &registry,
                              Platform &platform, Range_allocator &phys_alloc,
                              Affinity::Location affinity)
            :
                Trace::Control(),
                Trace::Source(*this, *this), affinity(affinity)
            {
                Thread_info::init_tcb(platform, phys_alloc, 0, affinity.xpos());
                registry.insert(this);
            }
        };

        new (core_mem_alloc())
            Idle_trace_source(Trace::sources(), *this,
                              _core_mem_alloc.phys_alloc(),
                              Affinity::Location(cpu_id, 0,
                                                 affinity_space().width(),
                                                 affinity_space().height()));
    }

    /* I/O port allocator (only meaningful for x86) */
    _io_port_alloc.add_range(0, 0x10000);

    /* core log as ROM module */
    {
        void * core_local_ptr = nullptr;
        unsigned const pages = 1;
        size_t const log_size = pages << get_page_size_log2();
        addr_t const phys_addr = Untyped_memory::alloc_page(ram_alloc());
        Untyped_memory::convert_to_page_frames(phys_addr, pages);

        /* let one page free after the log buffer */
        region_alloc().alloc_aligned(log_size + get_page_size(), &core_local_ptr,
                                     get_page_size_log2());
        addr_t const core_local_addr = reinterpret_cast<addr_t>(core_local_ptr);

        map_local(phys_addr, core_local_addr, pages, this);
        memset(core_local_ptr, 0, log_size);

        _rom_fs.insert(new (core_mem_alloc()) Rom_module(phys_addr, log_size,
                                                         "core_log"));

        init_core_log(Core_log_range { core_local_addr, log_size } );
    }

    _init_rom_modules();

    platform_in_construction = nullptr;
}


unsigned Platform::alloc_core_rcv_sel()
{
    Cap_sel rcv_sel = _core_sel_alloc.alloc();

|
|
|
	seL4_SetCapReceivePath(_core_cnode.sel().value(), rcv_sel.value(),
	                       _core_cnode.size_log2());
	return rcv_sel.value();
}


void Platform::reset_sel(unsigned sel)
{
	_core_cnode.remove(Cap_sel(sel));
}


void Platform::wait_for_exit(int const &)
{
	sleep_forever();
}