/*
 * \brief  seL4-specific capability-space management
 * \author Norman Feske
 * \date   2015-05-08
 */

/*
 * Copyright (C) 2015 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU General Public License version 2.
 */

#ifndef _BASE__INTERNAL__CAPABILITY_SPACE_SEL4_H_
#define _BASE__INTERNAL__CAPABILITY_SPACE_SEL4_H_

/* base includes */
#include <util/avl_tree.h>
#include <base/lock.h>

/* base-internal includes */
#include <internal/capability_space.h>
#include <internal/assert.h>

namespace Genode {

	template <unsigned, unsigned, typename> class Capability_space_sel4;

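	/**
	 * PD-local capability selector
	 */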
	class Cap_sel
	{
		private:

			addr_t _value;

		public:

			explicit Cap_sel(addr_t value) : _value(value) { }

			addr_t value() const { return _value; }
	};
}

/**
 * Platform-specific supplement to the generic 'Capability_space' interface
 */
namespace Genode { namespace Capability_space {

	/**
	 * Information needed to transfer capability via the kernel's IPC mechanism
	 */
	struct Ipc_cap_data
	{
		Rpc_obj_key rpc_obj_key;
		Cap_sel     sel;

		Ipc_cap_data(Rpc_obj_key rpc_obj_key, unsigned sel)
		: rpc_obj_key(rpc_obj_key), sel(sel) { }
	};

	/**
	 * Retrieve IPC data for given capability
	 */
	Ipc_cap_data ipc_cap_data(Native_capability const &cap);

	/**
	 * Allocate unused selector for receiving a capability via IPC
	 */
	unsigned alloc_rcv_sel();

	/**
	 * Delete selector but retain allocation
	 *
	 * This function is used when a delegated capability selector is replaced
	 * with an already known selector. The delegated selector is discarded.
	 */
	void reset_sel(unsigned sel);

	/**
	 * Look up capability by its RPC object key
	 */
	Native_capability lookup(Rpc_obj_key key);

	/**
	 * Import capability into the component's capability space
	 */
	Native_capability import(Ipc_cap_data ipc_cap_data);
} }
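
/*
 * Illustrative sketch of how the unmarshalling side of an RPC might combine
 * the functions above (hypothetical helper and variable names, not part of
 * the interface; assumes Genode namespace scope):
 *
 *   Native_capability obtain_cap(Rpc_obj_key key, unsigned rcv_sel)
 *   {
 *       Native_capability cap = Capability_space::lookup(key);
 *       if (cap.valid()) {
 *           Capability_space::reset_sel(rcv_sel);    (discard delegated selector)
 *           return cap;
 *       }
 *       return Capability_space::import(
 *           Capability_space::Ipc_cap_data(key, rcv_sel));
 *   }
 */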

namespace Genode
{
	enum {
		INITIAL_SEL_PARENT = 1,
		INITIAL_SEL_CNODE  = 2,
		INITIAL_SEL_END
	};

	enum {
		CSPACE_SIZE_LOG2          = 8,
		NUM_CORE_MANAGED_SEL_LOG2 = 7,
	};
}
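
/*
 * For reference, the log2 constants above correspond to the following
 * dimensions (illustrative arithmetic only, no additional definitions):
 *
 *   CSPACE_SIZE_LOG2          = 8  ->  2^8 = 256 selectors
 *   NUM_CORE_MANAGED_SEL_LOG2 = 7  ->  2^7 = 128 core-managed selectors
 *                                      (the statically-defined lower part)
 */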

/**
 * Capability space template
 *
 * The capability spaces of core and non-core components differ in two ways.
 *
 * First, core must keep track of all capabilities of the system. Hence, its
 * capability space must be dimensioned larger.
 *
 * Second, core has to maintain the information about the CAP session that
 * was used to allocate the capability to prevent misbehaving clients from
 * freeing capabilities allocated from another component. This information
 * is part of the core-specific 'Native_capability::Data' structure.
 */
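
/*
 * Illustrative instantiations (hypothetical dimensioning - the values and the
 * core-specific 'Core_cap_data' type are placeholders; the actual core- and
 * component-specific code supplies its own Data type and sizes):
 *
 *   Capability_space_sel4<1U << 12, 1U << NUM_CORE_MANAGED_SEL_LOG2,
 *                         Core_cap_data>            core_cap_space;
 *
 *   Capability_space_sel4<1U << CSPACE_SIZE_LOG2, 1U << NUM_CORE_MANAGED_SEL_LOG2,
 *                         Native_capability::Data>  component_cap_space;
 */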
template <unsigned NUM_CAPS, unsigned _NUM_CORE_MANAGED_CAPS, typename CAP_DATA>
class Genode::Capability_space_sel4
{
	public:

		/*
		 * The capability space consists of two parts. The lower part is
		 * populated with statically-defined capabilities whereas the upper
		 * part is dynamically managed by the component. The
		 * 'NUM_CORE_MANAGED_CAPS' defines the size of the first part.
		 */
		enum { NUM_CORE_MANAGED_CAPS = _NUM_CORE_MANAGED_CAPS };

	private:

		typedef CAP_DATA Data;

		/**
		 * Supplement Native_capability::Data with the meta data needed to
		 * manage it in an AVL tree
		 */
		struct Tree_managed_data : Data, Avl_node<Tree_managed_data>
		{
			template <typename... ARGS>
			Tree_managed_data(ARGS... args) : Data(args...) { }

			Tree_managed_data() { }

			bool higher(Tree_managed_data *data)
			{
				return data->rpc_obj_key().value() > rpc_obj_key().value();
			}

			Tree_managed_data *find_by_key(Rpc_obj_key key)
			{
				if (key.value() == rpc_obj_key().value()) return this;

				Tree_managed_data *data =
					this->child(key.value() > rpc_obj_key().value());

				return data ? data->find_by_key(key) : nullptr;
			}
		};
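
		/*
		 * The AVL tree is ordered by RPC object key (see 'higher' and
		 * 'find_by_key' above), which allows 'lookup()' to find the
		 * capability data associated with a given 'Rpc_obj_key'.
		 */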

		Tree_managed_data _caps_data[NUM_CAPS];

		Avl_tree<Tree_managed_data> _tree;

		Lock mutable _lock;

		/**
		 * Calculate index into _caps_data for capability data object
		 */
		unsigned _index(Data const &data) const
		{
			addr_t const offset = (addr_t)&data - (addr_t)_caps_data;
			return offset / sizeof(_caps_data[0]);
		}

		/**
		 * Return true if the capability is a core-managed capability
		 */
		bool _is_core_managed(Data &data) const
		{
			return _index(data) < NUM_CORE_MANAGED_CAPS;
		}

		void _remove(Native_capability::Data &data)
		{
			if (_caps_data[_index(data)].rpc_obj_key().valid())
				_tree.remove(static_cast<Tree_managed_data *>(&data));

			_caps_data[_index(data)] = Tree_managed_data();
		}

	public:

		/*****************************************************
		 ** Support for the Core_capability_space interface **
		 *****************************************************/

		/**
		 * Create Genode capability for kernel cap selector 'cap_sel'
		 *
		 * The arguments following the selector are passed to the constructor
		 * of the 'Native_capability::Data' type.
		 */
		template <typename... ARGS>
		Native_capability::Data &create_capability(Cap_sel cap_sel, ARGS... args)
		{
			Lock::Guard guard(_lock);

			addr_t const sel = cap_sel.value();

			ASSERT(sel < NUM_CAPS);
			ASSERT(!_caps_data[sel].rpc_obj_key().valid());

			_caps_data[sel] = Tree_managed_data(args...);

			if (_caps_data[sel].rpc_obj_key().valid())
				_tree.insert(&_caps_data[sel]);

			return _caps_data[sel];
		}
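
		/*
		 * Illustrative use (hypothetical call site - the concrete constructor
		 * arguments depend on the CAP_DATA type the template is instantiated
		 * with):
		 *
		 *   Native_capability::Data &data =
		 *       cap_space.create_capability(Cap_sel(sel), Rpc_obj_key(key));
		 */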

		/**
		 * Return kernel cap selector
		 */
		unsigned sel(Data const &data) const { return _index(data); }


		/************************************************
		 ** Support for the Capability_space interface **
		 ************************************************/

		void dec_ref(Data &data)
		{
			Lock::Guard guard(_lock);

			if (!_is_core_managed(data) && !data.dec_ref())
				_remove(data);
		}

		void inc_ref(Data &data)
		{
			Lock::Guard guard(_lock);

			if (!_is_core_managed(data))
				data.inc_ref();
		}

		Rpc_obj_key rpc_obj_key(Data const &data) const
		{
			return data.rpc_obj_key();
		}

		Capability_space::Ipc_cap_data ipc_cap_data(Data const &data) const
		{
			return { rpc_obj_key(data), sel(data) };
		}

		Data *lookup(Rpc_obj_key key) const
		{
			Lock::Guard guard(_lock);

			if (!_tree.first())
				return nullptr;

			return _tree.first()->find_by_key(key);
		}
};

#endif /* _BASE__INTERNAL__CAPABILITY_SPACE_SEL4_H_ */