genode/repos/base-sel4/src/core/stack_area.cc


/*
 * \brief  Support code for the thread API
 * \author Norman Feske
 * \author Stefan Kalkowski
 * \date   2010-01-13
 */

/*
 * Copyright (C) 2010-2015 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU General Public License version 2.
 */

/* Genode includes */
#include <rm_session/rm_session.h>
#include <ram_session/ram_session.h>
#include <base/printf.h>
#include <base/synced_allocator.h>
#include <base/thread.h>

/* local includes */
#include <platform.h>
#include <map_local.h>
#include <dataspace_component.h>
#include <untyped_memory.h>

/* base-internal includes */
#include <base/internal/stack_area.h>

using namespace Genode;

/**
 * Region-manager session for allocating stacks
 *
 * This class corresponds to the managed dataspace that is normally used for
 * organizing stacks within the stack area. In contrast to the ordinary
 * implementation, core's version does not split the allocation of memory
 * from virtual-memory management. Because no entrypoint is in place yet,
 * there are no "real" dataspaces and no capabilities referring to them.
 * Hence, the allocation of a dataspace has no effect, and attaching the
 * thereby "empty" dataspace performs both steps at once: allocation and
 * attachment.
 */
class Stack_area_rm_session : public Rm_session
{
	private:

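		/*
		 * Slab allocator for 'Dataspace_component' meta data, backed by
		 * core's synchronized memory allocator
		 */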
		using Ds_slab = Synced_allocator<Tslab<Dataspace_component,
		                                       get_page_size()> >;

		Ds_slab _ds_slab { platform()->core_mem_alloc() };

		enum { verbose = false };

	public:

		/**
		 * Allocate and attach on-the-fly backing store to the stack area
		 */
		Local_addr attach(Dataspace_capability ds_cap, /* ignored capability */
		                  size_t size, off_t offset,
		                  bool use_local_addr, Local_addr local_addr,
		                  bool executable)
		{
			size = round_page(size);

			/* allocate physical memory */
			Range_allocator &phys_alloc = *platform_specific()->ram_alloc();
			size_t const num_pages = size >> get_page_size_log2();
			addr_t const phys = Untyped_memory::alloc_pages(phys_alloc, num_pages);
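
			/*
			 * On this seL4 version, physical memory is managed as 4-KiB
			 * untyped pages. The freshly allocated pages must be retyped
			 * into page frames before they can be mapped.
			 */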
			Untyped_memory::convert_to_page_frames(phys, num_pages);
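
			/*
			 * Create the dataspace component that represents the backing
			 * store. The core-local address is not known yet, hence 0.
			 */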
			Dataspace_component *ds = new (&_ds_slab)
				Dataspace_component(size, 0, phys, CACHED, true, 0);
			if (!ds) {
				PERR("dataspace for core stack does not exist");
				return (addr_t)0;
			}
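
			/*
			 * 'local_addr' is the stack's offset within the stack area.
			 * Translate it into core's virtual address space.
			 */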
			addr_t const core_local_addr =
				stack_area_virtual_base() + (addr_t)local_addr;

			if (verbose)
				PDBG("core_local_addr = %lx, phys_addr = %lx, size = 0x%zx",
				     core_local_addr, ds->phys_addr(), ds->size());
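
			/* map the physical pages into core's virtual address space */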
			if (!map_local(ds->phys_addr(), core_local_addr,
			               ds->size() >> get_page_size_log2())) {
				PERR("could not map phys %lx at local %lx",
				     ds->phys_addr(), core_local_addr);
				return (addr_t)0;
			}

			ds->assign_core_local_addr((void*)core_local_addr);

			return local_addr;
		}
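
		/*
		 * The remaining Rm_session operations are not used for core's
		 * stack area and are therefore left as dummy implementations.
		 */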
		void detach(Local_addr local_addr) { PWRN("Not implemented!"); }

		Pager_capability add_client(Thread_capability) {
			return Pager_capability(); }

		void remove_client(Pager_capability) { }

		void fault_handler(Signal_context_capability) { }

		State state() { return State(); }

		Dataspace_capability dataspace() { return Dataspace_capability(); }
};
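

/*
 * As explained above, the allocation of stack memory has no effect by
 * itself. Therefore, 'alloc' merely returns an invalid capability. The
 * actual backing store is allocated when attaching the dataspace to the
 * stack area.
 */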
class Stack_area_ram_session : public Ram_session
{
	public:

		Ram_dataspace_capability alloc(size_t size, Cache_attribute cached) {
			return reinterpret_cap_cast<Ram_dataspace>(Native_capability()); }

		void free(Ram_dataspace_capability ds) {
			PWRN("Not implemented!"); }

		int ref_account(Ram_session_capability ram_session) { return 0; }

		int transfer_quota(Ram_session_capability ram_session, size_t amount) {
			return 0; }

		size_t quota() { return 0; }

		size_t used() { return 0; }
};


/**
 * Return single instance of the stack-area RM and RAM session
 */
namespace Genode {

	Rm_session *env_stack_area_rm_session()
	{
		static Stack_area_rm_session inst;
		return &inst;
	}

	Ram_session *env_stack_area_ram_session()
	{
		static Stack_area_ram_session inst;
		return &inst;
	}
}
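

/*
 * Usage sketch (hypothetical, for illustration only): the thread library
 * would obtain the singletons above and attach a stack's backing store at
 * a chosen offset within the stack area, e.g.:
 *
 *   addr_t const offset = stack_area_virtual_size() - stack_size;
 *   env_stack_area_rm_session()->attach(Dataspace_capability(), stack_size,
 *                                       0, true, offset, false);
 *
 * The dataspace capability is ignored. The call allocates the physical
 * backing store and maps it at stack_area_virtual_base() + offset. The
 * offset computation here is illustrative, not taken from the actual
 * thread library.
 */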