genode/repos/base-sel4/src/core/include/initial_untyped_pool.h

sel4: update to version 2.1

This patch updates seL4 from the experimental branch of one year ago to the master branch of version 2.1. The transition has the following implications.

In contrast to the experimental branch, the master branch has no way to manually define the allocation of kernel objects within untyped memory ranges. Instead, the kernel maintains a built-in allocation policy. This policy rules out the deallocation of once-used parts of untyped memory. The only way to reuse memory is to revoke the entire untyped memory range. Consequently, we cannot share a large untyped memory range for kernel objects of different protection domains. In order to reuse memory at a reasonably fine granularity, we need to split the initial untyped memory ranges into small chunks that can be individually revoked. Those chunks are called "untyped pages". An untyped page is a 4 KiB untyped memory region.

The bootstrapping of core now has to employ a two-stage allocation approach. The initial kernel objects for core, which remain static during the entire lifetime of the system, are created directly out of the initial untyped memory regions as reported by the kernel. The so-called "initial untyped pool" keeps track of the consumption of those untyped memory ranges by mimicking the kernel's internal allocation policy. Kernel objects created this way can be of any size. For example, the phys CNode, which is used to store page-frame capabilities, is 16 MiB in size. Also, core's CSpace uses a relatively large CNode.

After the initial setup phase, all remaining untyped memory is turned into untyped pages. From this point on, newly created kernel objects cannot exceed 4 KiB in size because one kernel object cannot span multiple untyped memory regions.

The capability selectors for untyped pages are organized similarly to those of page-frame capabilities. There is a new 2nd-level CNode (UNTYPED_CORE_CNODE) that is dimensioned according to the maximum amount of physical memory (1M entries, each entry representing 4 KiB). The CNode is organized such that an index into the CNode directly corresponds to the physical frame number of the underlying memory. This way, we can easily determine an untyped-page selector for any physical address, i.e., for revoking the kernel objects allocated at a specific physical page. The downside is the need for another 16 MiB chunk of metadata. Also, we need to keep in mind that this approach won't scale to 64-bit systems. We will eventually need to replace the PHYS_CORE_CNODE and UNTYPED_CORE_CNODE by CNode hierarchies to model a sparsely populated CNode.

The size constraint on kernel objects has the immediate implication that the VM CSpaces of protection domains must be organized via several levels of CNodes. I.e., as the top-level CNode of core has a size of 2^12, the remaining 20 PD-specific CSpace address bits are organized as a 2nd-level 2^4 padding CNode, a 3rd-level 2^8 CNode, and several 4th-level 2^8 leaf CNodes. The latter contain the actual selectors for the page tables and page-table entries of the respective PD.

As another slight difference from the experimental branch, the master branch requires the explicit assignment of page directories to an ASID pool.

Besides the adjustment to the new seL4 version, the patch introduces a dedicated type for capability selectors. Previously, we represented them as plain unsigned integer values, which became increasingly confusing. The new type 'Cap_sel' is a PD-local capability selector. The type 'Cnode_index' is an index into a CNode (which is generally not the entire CSpace of the PD).

Fixes #1887

2016-02-03 14:50:44 +01:00
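To make the frame-to-selector correspondence described above concrete, the following minimal sketch shows how an untyped-page selector index could be derived from a physical address. The names 'untyped_page_index' and 'PAGE_SIZE_LOG2' are illustrative assumptions, not identifiers from this header or from Genode's API: because UNTYPED_CORE_CNODE is indexed by physical frame number, the index is simply the address shifted right by the page-size log2.

/* illustrative sketch only - the names below are assumptions, not Genode API */
enum { PAGE_SIZE_LOG2 = 12 };   /* an untyped page is 4 KiB */

unsigned long untyped_page_index(unsigned long phys_addr)
{
	/* an index into UNTYPED_CORE_CNODE corresponds to a physical frame number */
	return phys_addr >> PAGE_SIZE_LOG2;
}

Revoking the kernel objects backed by a given physical page then amounts to revoking the untyped-page capability found at this index.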
/*
* \brief Initial pool of untyped memory
* \author Norman Feske
* \date 2016-02-11
*/
/*
* Copyright (C) 2016-2017 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__INCLUDE__INITIAL_UNTYPED_POOL_H_
#define _CORE__INCLUDE__INITIAL_UNTYPED_POOL_H_
/* Genode includes */
#include <base/exception.h>
#include <base/log.h>
/* core-local includes */
#include <sel4_boot_info.h>
/* seL4 includes */
#include <sel4/sel4.h>
namespace Genode { class Initial_untyped_pool; }
class Genode::Initial_untyped_pool
{
private:
/* limit based on seL4's autoconf.h */
enum { MAX_UNTYPED = (unsigned)CONFIG_MAX_NUM_BOOTINFO_UNTYPED_CAPS };
struct Free_offset { addr_t value = 0; };
Free_offset _free_offset[MAX_UNTYPED];
public:
class Initial_untyped_pool_exhausted : Exception { };
struct Range
{
/* core-local cap selector */
unsigned const sel;
/* index into the boot info's 'untypedList' */
unsigned const index = sel - sel4_boot_info().untyped.start;
/* original size of untyped memory range */
size_t const size = 1UL << sel4_boot_info().untypedList[index].sizeBits;
/* physical address of the begin of the untyped memory range */
addr_t const phys = sel4_boot_info().untypedList[index].paddr;
bool const device = sel4_boot_info().untypedList[index].isDevice;
/* offset to the unused part of the untyped memory range */
addr_t &free_offset;
Range(Initial_untyped_pool &pool, unsigned sel)
:
sel(sel), free_offset(pool._free_offset[index].value)
{ }
};
private:
/**
* Calculate free index after allocation
*/
addr_t _align_offset(Range &range, size_t size_log2)
{
/*
* The seL4 kernel naturally aligns allocations within untyped
* memory ranges. So we have to apply the same policy to our
* shadow version of the kernel's 'FreeIndex'.
*/
addr_t const aligned_free_offset = align_addr(range.free_offset,
size_log2);
return aligned_free_offset + (1 << size_log2);
}
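/*
 * Worked example (illustrative, following the alignment policy above):
 * with 'range.free_offset' at 0x300 and a request of size_log2 = 12
 * (a 4 KiB object), the offset is first aligned up to 0x1000, so the
 * returned new free offset is 0x2000.
 */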
/**
* Apply functor to each untyped memory range
*
* The functor is called with 'Range &' as argument.
*/
template <typename FUNC>
void for_each_range(FUNC const &func)
{
seL4_BootInfo const &bi = sel4_boot_info();
for (unsigned sel = bi.untyped.start; sel < bi.untyped.end; sel++) {
Range range(*this, sel);
func(range);
}
}
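/*
 * Usage sketch (illustrative only, not part of this header): within this
 * class, the total amount of allocatable (non-device) untyped memory
 * could be summed up as
 *
 *   size_t total = 0;
 *   for_each_range([&] (Range &range) {
 *     if (!range.device) total += range.size; });
 */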
public:
/**
* Return selector of untyped memory range where the allocation of
* the specified size is possible
*
* \param size_log2  log2 of the kernel-object size
*
* This function models seL4's allocation policy of untyped memory. It
* is solely used at boot time to setup core's initial kernel objects
* from the initial pool of untyped memory ranges as reported by the
* kernel.
*
* \throw Initial_untyped_pool_exhausted
*/
unsigned alloc(size_t size_log2)
{
enum { UNKNOWN = 0 };
unsigned sel = UNKNOWN;
/*
* Go through the known initial untyped memory ranges to find
* a range that is able to host a kernel object of the specified size.
*/
for_each_range([&] (Range &range) {
/* ignore device memory */
if (range.device)
return;
/* calculate free index after allocation */
addr_t const new_free_offset = _align_offset(range, size_log2);
/* check if allocation fits within current untyped memory range */
if (new_free_offset > range.size)
return;
if (sel == UNKNOWN) {
sel = range.sel;
return;
}
/* best fit: prefer the range that would leave the smaller remainder */
addr_t const rest = range.size - new_free_offset;
Range best_fit(*this, sel);
addr_t const new_free_offset_best = _align_offset(best_fit, size_log2);
addr_t const rest_best = best_fit.size - new_free_offset_best;
if (rest_best >= rest)
/* current range fits better than the best range so far */
sel = range.sel;
});
if (sel == UNKNOWN) {
warning("Initial_untyped_pool exhausted");
throw Initial_untyped_pool_exhausted();
}
Range best_fit(*this, sel);
addr_t const new_free_offset = _align_offset(best_fit, size_log2);
ASSERT(new_free_offset <= best_fit.size);
/*
* We found a matching range, consume 'size' and report the
* selector. The returned selector is used by the caller
* of 'alloc' to perform the actual kernel-object creation.
*/
best_fit.free_offset = new_free_offset;
return best_fit.sel;
}
/**
 * Convert the remainder of the initial untyped memory into untyped
 * objects of size 2^'size_log2', up to the maximum amount of memory
 * specified by 'max_memory'
 */
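/*
 * Illustrative call sketch (not the actual call site; 'pool' and the
 * lambda body are placeholders): core may convert all remaining untyped
 * memory into page-sized untyped objects and learn about each converted
 * range via the callback, e.g.
 *
 *   pool.turn_into_untyped_object(Core_cspace::UNTYPED_CORE_CNODE,
 *                                 [&] (addr_t phys, addr_t size, bool device) {
 *                                     ... register [phys, phys+size) ... });
 */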
template <typename FUNC>
void turn_into_untyped_object(addr_t const node_index,
FUNC const & func,
addr_t const size_log2 = get_page_size_log2(),
addr_t max_memory = 0UL - 0x1000UL)
{
for_each_range([&] (Range &range) {
/*
* The kernel limits the maximum number of kernel objects to
* be created via a single untyped-retype operation. So we
* need to iterate for each range, converting a limited batch
* of pages in each step.
*/
for (;;) {
addr_t const page_aligned_free_offset =
align_addr(range.free_offset, size_log2);
/* back out if no further page can be allocated */
if (page_aligned_free_offset + (1UL << size_log2) > range.size)
return;
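/* honor the 'max_memory' limit, stop as soon as it is exhausted */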
if (!max_memory)
return;
size_t const remaining_size = range.size - page_aligned_free_offset;
size_t const retype_size_limit = get_page_size()*256;
size_t const batch_size = min(min(remaining_size, retype_size_limit), max_memory);
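/*
 * With the default page-sized objects, 'retype_size_limit' caps a
 * single retype operation at 256 objects (1 MiB of untyped memory)
 * per loop iteration.
 */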
/* mark consumed untyped memory range as allocated */
range.free_offset += batch_size;
addr_t const phys_addr = range.phys + page_aligned_free_offset;
size_t const num_pages = batch_size / (1UL << size_log2);
seL4_Untyped const service = range.sel;
addr_t const type = seL4_UntypedObject;
addr_t const size_bits = size_log2;
seL4_CNode const root = Core_cspace::top_cnode_sel();
addr_t const node_depth = Core_cspace::NUM_TOP_SEL_LOG2;
addr_t const node_offset = phys_addr >> size_log2;
addr_t const num_objects = num_pages;
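/*
 * 'node_offset' is the physical address shifted by 'size_log2', so the
 * destination slot within the target CNode corresponds directly to the
 * physical frame number of the converted memory (for page-sized objects).
 */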
/* XXX skip memory because of limited untyped cnode range */
if (node_offset >= (1UL << (32 - get_page_size_log2()))) {
Genode::warning(range.device ? "device" : " ", " memory in range ",
                Hex_range<addr_t>(range.phys, range.size),
                " is unavailable (due to limited untyped cnode range)");
return;
}
long const ret = seL4_Untyped_Retype(service,
type,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
if (ret != 0) {
error(__func__, ": seL4_Untyped_Retype (untyped) "
"returned ", ret);
return;
}
/* track memory left to be converted */
max_memory -= batch_size;
/* convert device memory directly into page frames */
if (range.device) {
size_t const num_pages = batch_size >> get_page_size_log2();
Untyped_memory::convert_to_page_frames(phys_addr, num_pages);
}
/* invoke callback about the range */
func(phys_addr, num_pages << size_log2, range.device);
}
});
}
};
#endif /* _CORE__INCLUDE__INITIAL_UNTYPED_POOL_H_ */