sel4: update to version 2.1

This patch updates seL4 from the experimental branch of one year ago to
the master branch of version 2.1. The transition has the following
implications.

In contrast to the experimental branch, the master branch has no way to
manually define the allocation of kernel objects within untyped memory
ranges. Instead, the kernel maintains a built-in allocation policy. This
policy rules out the deallocation of once-used parts of untyped memory.
The only way to reuse memory is to revoke the entire untyped memory
range. Consequently, we cannot share a large untyped memory range among
the kernel objects of different protection domains. To reuse memory
at a reasonably fine granularity, we need to split the initial untyped
memory ranges into small chunks that can be individually revoked. Those
chunks are called "untyped pages". An untyped page is a 4 KiB untyped
memory region.
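
For illustration (this sketch is not part of the patch and the selector
arguments are hypothetical), reclaiming the memory behind one untyped
page amounts to revoking its selector, which destroys every kernel
object that was retyped from that page:

  /* illustration only - the selector layout follows the description above */
  #include <base/printf.h>   /* PWRN */
  #include <sel4/sel4.h>

  void revoke_untyped_page(seL4_CNode untyped_cnode,   /* CNode holding untyped-page caps */
                           seL4_Word  page_idx,        /* slot index within that CNode    */
                           seL4_Uint8 cnode_size_log2) /* depth used to address the slot  */
  {
      int const ret = seL4_CNode_Revoke(untyped_cnode, page_idx, cnode_size_log2);
      if (ret != 0)
          PWRN("seL4_CNode_Revoke returned %d", ret);
  }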

The bootstrapping of core now has to employ a two-stage allocation
approach. The initial kernel objects of core, which remain static
during the entire lifetime of the system, are created directly out of
the initial untyped memory regions as reported by the kernel. The
so-called "initial untyped pool" keeps track of the consumption of
those untyped memory ranges by mimicking the kernel's internal
allocation policy. Kernel objects created this way can be of any size.
For example, the phys CNode, which is used to store page-frame
capabilities, is 16 MiB in size. Also, core's CSpace uses a relatively
large CNode.
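
As a condensed sketch of this bookkeeping (the complete version is the
new initial_untyped_pool.h further below), the pool keeps a shadow
"free offset" per untyped range and advances it exactly like the
kernel's 'FreeIndex', i.e., naturally aligned to the object size:

  /* condensed sketch - the full version is in initial_untyped_pool.h below */
  #include <base/stdint.h>
  #include <util/misc_math.h>   /* align_addr */

  struct Untyped_range
  {
      Genode::size_t size;         /* size of the untyped range in bytes */
      Genode::addr_t free_offset;  /* shadow of the kernel's 'FreeIndex' */
  };

  /* consume 2^size_log2 bytes if they fit, mimicking the kernel's policy */
  bool try_alloc(Untyped_range &range, Genode::size_t size_log2)
  {
      /* the kernel naturally aligns objects within the untyped range */
      Genode::addr_t const aligned =
          Genode::align_addr(range.free_offset, (int)size_log2);

      if (aligned + (1UL << size_log2) > range.size)
          return false;

      range.free_offset = aligned + (1UL << size_log2);
      return true;
  }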

After the initial setup phase, all remaining untyped memory is turned
into untyped pages. From this point on, newly created kernel objects
cannot exceed 4 KiB in size because one kernel object cannot span
multiple untyped memory regions. The capability selectors for untyped
pages are organized similarly to those of page-frame capabilities. There
is a new 2nd-level CNode (UNTYPED_CORE_CNODE) that is dimensioned
according to the maximum amount of physical memory (1M entries, each
entry representing 4 KiB). The CNode is organized such that an index
into the CNode directly corresponds to the physical frame number of the
underlying memory. This way, we can easily determine an untyped-page
selector for any physical address, i.e., for revoking the kernel
objects allocated at a specific physical page. The downside is the need
for another 16 MiB chunk of meta data. Also, we need to keep in mind
that this approach won't scale to 64-bit systems. We will eventually
need to replace the PHYS_CORE_CNODE and UNTYPED_CORE_CNODE by CNode
hierarchies to model a sparsely populated CNode.
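
For illustration, turning a physical address into the corresponding
core-local untyped-page selector is plain bit arithmetic. This mirrors
the helper added to untyped_memory.h below; the enum names come from
core's core_cspace.h:

  /* mirrors Untyped_memory::_core_local_sel in untyped_memory.h below */
  #include <core_cspace.h>   /* core-local, provides the enum values used here */
  #include <util.h>          /* get_page_size_log2 */

  Genode::addr_t untyped_page_sel(Genode::addr_t phys_addr)
  {
      using namespace Genode;

      /* index of the untyped 2nd-level CNode within the top-level CNode */
      addr_t const upper_bits = (addr_t)Core_cspace::TOP_CNODE_UNTYPED_IDX
                              << Core_cspace::NUM_PHYS_SEL_LOG2;

      /* slot within the 2nd-level CNode equals the physical frame number */
      addr_t const lower_bits = phys_addr >> get_page_size_log2();

      return upper_bits | lower_bits;
  }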

The size constraint of kernel objects has the immediate implication that
the VM CSpaces of protection domains must be organized via several
levels of CNodes. That is, since the top-level CNode of core has a size of
2^12, the remaining 20 PD-specific CSpace address bits are organized as
a 2nd-level 2^4 padding CNode, a 3rd-level 2^8 CNode, and several
4th-level 2^8 leaf CNodes. The latter contain the actual selectors for
the page tables and page-table entries of the respective PD.
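
For illustration, a 32-bit CSpace address of such a VM selector
decomposes into the four levels as follows (the bit positions follow
directly from the sizes given above):

  /* illustration of the 12 + 4 + 8 + 8 bit split described above */
  struct Vm_sel_bits
  {
      unsigned top;    /* bits 31..20: core's top-level CNode (2^12 slots) */
      unsigned pad;    /* bits 19..16: 2nd-level padding CNode (2^4)       */
      unsigned third;  /* bits 15..8:  3rd-level CNode (2^8)               */
      unsigned leaf;   /* bits  7..0:  slot in a 4th-level leaf CNode      */
  };

  Vm_sel_bits decompose(unsigned sel)
  {
      return Vm_sel_bits { sel >> 20, (sel >> 16) & 0xf,
                           (sel >> 8) & 0xff, sel & 0xff };
  }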

As another slight difference from the experimental branch, the master
branch requires the explicit assignment of page directories to an ASID
pool.
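
A minimal sketch of this extra step, assuming the ia32 naming of the
seL4 2.1 bindings and selectors allocated by the caller beforehand:

  /* sketch only - selectors are whatever the caller allocated beforehand */
  #include <base/printf.h>   /* PERR */
  #include <sel4/sel4.h>

  void assign_pd_to_asid_pool(seL4_CPtr asid_pool_sel, seL4_CPtr page_directory_sel)
  {
      int const ret = seL4_IA32_ASIDPool_Assign(asid_pool_sel, page_directory_sel);
      if (ret != 0)
          PERR("seL4_IA32_ASIDPool_Assign returned %d", ret);
  }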

Besides the adjustment to the new seL4 version, the patch introduces a
dedicated type for capability selectors. Previously, we represented
them as plain unsigned integer values, which became increasingly
confusing. The new type 'Cap_sel' is a PD-local capability selector.
The type 'Cnode_index' is an index into a CNode (which is generally not
the entire CSpace of the PD).
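
A small usage sketch: the explicit constructor rules out accidental
mixing with plain integers, while the raw value remains available where
the seL4 bindings expect a word:

  /* sketch - Cap_sel comes from internal/capability_space_sel4.h (see below) */
  #include <internal/capability_space_sel4.h>
  #include <sel4/sel4.h>

  void resume_thread(Genode::Cap_sel tcb_sel)
  {
      /* Genode::Cap_sel tcb = 0x20;  would not compile, constructor is explicit */
      seL4_TCB_Resume(tcb_sel.value());   /* raw bindings still take plain words */
  }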

Fixes #1887
Author: Norman Feske, 2016-02-03 14:50:44 +01:00 (committed by Christian Helmuth)
parent 2df6cd64d4
commit 9e6f3be806
44 changed files with 1174 additions and 1162 deletions


@ -17,7 +17,7 @@
/* Genode includes */
#include <base/printf.h>
#define assert(v) do { \
#define seL4_Assert(v) do { \
if (!(v)) { \
PDBG("assertion failed: %s", #v); \
for (;;); \


@ -18,6 +18,10 @@
#define DEBUG 1
#define CONFIG_MAX_NUM_BOOTINFO_UNTYPED_CAPS 800
#define CONFIG_MAX_NUM_BOOTINFO_UNTYPED_CAPS 166
#define CONFIG_NUM_PRIORITIES 256
#define CONFIG_MAX_NUM_BOOTINFO_DEVICE_REGIONS 199
#endif /* _INCLUDE__SEL4__AUTOCONF_H_ */


@ -16,13 +16,12 @@ LINKER_OPT_PREFIX := -Wl,
build_kernel:
$(VERBOSE)$(MAKE) \
TOOLPREFIX=$(CROSS_DEV_PREFIX) \
ARCH=ia32 PLAT=pc99 DEBUG=1 \
ARCH=x86 SEL4_ARCH=ia32 PLAT=pc99 DEBUG=1 \
LDFLAGS+=-nostdlib LDFLAGS+=-Wl,-nostdlib \
$(addprefix LDFLAGS+=$(LINKER_OPT_PREFIX),$(LD_MARCH)) \
CFLAGS+=-fno-builtin-printf \
$(addprefix CFLAGS+=,$(CC_MARCH)) \
CONFIG_KERNEL_EXTRA_CPPFLAGS+=-DCONFIG_MAX_NUM_IOAPIC=1 \
CONFIG_KERNEL_EXTRA_CPPFLAGS+=-DCONFIG_IRQ_IOAPIC=1 \
CONFIG_KERNEL_EXTRA_CPPFLAGS+=-DCONFIG_NUM_DOMAINS=1 \
SOURCE_ROOT=$(SEL4_DIR) -f$(SEL4_DIR)/Makefile


@ -20,23 +20,32 @@ LIBSEL4_DIR := $(call select_from_ports,sel4)/src/kernel/sel4/libsel4
# 'include_arch/<arch>/sel4' directories into our local build directory.
#
SEL4_ARCH_INCLUDES := objecttype.h types.h bootinfo.h constants.h functions.h \
pfIPC.h syscalls.h exIPC.h invocation.h
SEL4_ARCH_INCLUDES := simple_types.h types.h constants.h objecttype.h \
functions.h syscalls.h invocation.h deprecated.h
SEL4_INCLUDES := objecttype.h types.h bootinfo.h errors.h constants.h \
messages.h sel4.h benchmark.h macros.h \
types_gen.h syscall.h invocation.h
ARCH_INCLUDES := objecttype.h types.h constants.h functions.h deprecated.h \
pfIPC.h syscalls.h exIPC.h invocation.h simple_types.h
SEL4_INCLUDE_SYMLINKS += $(addprefix $(BUILD_BASE_DIR)/include/sel4/, $(SEL4_INCLUDES))
SEL4_INCLUDE_SYMLINKS += $(addprefix $(BUILD_BASE_DIR)/include/sel4/arch/,$(SEL4_ARCH_INCLUDES))
SEL4_INCLUDE_SYMLINKS += $(BUILD_BASE_DIR)/include/sel4/interfaces/sel4_client.h
INCLUDES := objecttype.h types.h bootinfo.h errors.h constants.h \
messages.h sel4.h macros.h simple_types.h types_gen.h syscall.h \
invocation.h shared_types_gen.h debug_assert.h shared_types.h \
sel4.h deprecated.h
all: $(SEL4_INCLUDE_SYMLINKS)
INCLUDE_SYMLINKS += $(addprefix $(BUILD_BASE_DIR)/include/sel4/, $(INCLUDES))
INCLUDE_SYMLINKS += $(addprefix $(BUILD_BASE_DIR)/include/sel4/arch/, $(ARCH_INCLUDES))
INCLUDE_SYMLINKS += $(addprefix $(BUILD_BASE_DIR)/include/sel4/sel4_arch/,$(SEL4_ARCH_INCLUDES))
INCLUDE_SYMLINKS += $(BUILD_BASE_DIR)/include/interfaces/sel4_client.h
all: $(INCLUDE_SYMLINKS)
#
# Plain symlinks to existing headers
#
$(BUILD_BASE_DIR)/include/sel4/arch/%.h: $(LIBSEL4_DIR)/arch_include/ia32/sel4/arch/%.h
$(BUILD_BASE_DIR)/include/sel4/sel4_arch/%.h: $(LIBSEL4_DIR)/sel4_arch_include/ia32/sel4/sel4_arch/%.h
$(VERBOSE)mkdir -p $(dir $@)
$(VERBOSE)ln -sf $< $@
$(BUILD_BASE_DIR)/include/sel4/arch/%.h: $(LIBSEL4_DIR)/arch_include/x86/sel4/arch/%.h
$(VERBOSE)mkdir -p $(dir $@)
$(VERBOSE)ln -sf $< $@
@ -53,6 +62,12 @@ $(BUILD_BASE_DIR)/include/sel4/types_gen.h: $(LIBSEL4_DIR)/include/sel4/types.bf
$(VERBOSE)python $(LIBSEL4_DIR)/tools/bitfield_gen.py \
--environment libsel4 "$<" $@
$(BUILD_BASE_DIR)/include/sel4/shared_types_gen.h: $(LIBSEL4_DIR)/include/sel4/shared_types.bf
$(MSG_CONVERT)$(notdir $@)
$(VERBOSE)mkdir -p $(dir $@)
$(VERBOSE)python $(LIBSEL4_DIR)/tools/bitfield_gen.py \
--environment libsel4 "$<" $@
$(BUILD_BASE_DIR)/include/sel4/syscall.h: $(LIBSEL4_DIR)/include/api/syscall.xml
$(MSG_CONVERT)$(notdir $@)
$(VERBOSE)mkdir -p $(dir $@)
@ -65,19 +80,27 @@ $(BUILD_BASE_DIR)/include/sel4/invocation.h: $(LIBSEL4_DIR)/include/interfaces/s
$(VERBOSE)python $(LIBSEL4_DIR)/tools/invocation_header_gen.py \
--xml $< --libsel4 --dest $@
$(BUILD_BASE_DIR)/include/sel4/arch/invocation.h: $(LIBSEL4_DIR)/arch_include/ia32/interfaces/sel4arch.xml
$(BUILD_BASE_DIR)/include/sel4/sel4_arch/invocation.h: $(LIBSEL4_DIR)/sel4_arch_include/ia32/interfaces/sel4arch.xml
$(MSG_CONVERT)$(notdir $@)
$(VERBOSE)mkdir -p $(dir $@)
$(VERBOSE)python $(LIBSEL4_DIR)/tools/invocation_header_gen.py \
--xml $< --libsel4 --dest $@
$(BUILD_BASE_DIR)/include/sel4/arch/invocation.h: $(LIBSEL4_DIR)/arch_include/x86/interfaces/sel4arch.xml
$(MSG_CONVERT)arch/$(notdir $@)
$(VERBOSE)mkdir -p $(dir $@)
$(VERBOSE)python $(LIBSEL4_DIR)/tools/invocation_header_gen.py \
--xml $< --libsel4 --arch --dest $@
--xml $< --libsel4 --sel4_arch --dest $@
SEL4_CLIENT_H_SRC := $(LIBSEL4_DIR)/include/interfaces/sel4.xml \
$(LIBSEL4_DIR)/arch_include/ia32/interfaces/sel4arch.xml
SEL4_CLIENT_H_SRC := $(LIBSEL4_DIR)/sel4_arch_include/ia32/interfaces/sel4arch.xml \
$(LIBSEL4_DIR)/arch_include/x86/interfaces/sel4arch.xml \
$(LIBSEL4_DIR)/include/interfaces/sel4.xml
$(BUILD_BASE_DIR)/include/sel4/interfaces/sel4_client.h: $(SEL4_CLIENT_H_SRC)
$(BUILD_BASE_DIR)/include/interfaces/sel4_client.h: $(SEL4_CLIENT_H_SRC)
$(MSG_CONVERT)$(notdir $@)
$(VERBOSE)mkdir -p $(dir $@)
$(VERBOSE)python $(LIBSEL4_DIR)/tools/syscall_stub_gen.py \
-a ia32 -o $@ $(SEL4_CLIENT_H_SRC)
--buffer -a ia32 -o $@ $(SEL4_CLIENT_H_SRC)
endif


@ -1 +1 @@
aaedf65867d799944024bab904c344c160069c0f
994f493f94aea91a945653a057f025b8f29fda83


@ -3,9 +3,6 @@ VERSION := git
DOWNLOADS := sel4.git
URL(sel4) := https://github.com/seL4/seL4.git
# experimental branch
REV(sel4) := b6fbb78cb1233aa8549ea3acb90524306f49a8d2
# master branch, version 2.1
REV(sel4) := 0115ad1d0d7a871d637f8ceb79e37b46f9981249
DIR(sel4) := src/kernel/sel4
PATCHES := src/kernel/macros.patch
PATCH_OPT := -p1 -d src/kernel/sel4


@ -1,9 +0,0 @@
build { core }
create_boot_directory
build_boot_image "core"
append qemu_args " -nographic -m 64 "
run_genode_until forever


@ -1,12 +0,0 @@
# enable special handling in tool/run/boot_dir/sel4
set core_test "sel4"
build { test/sel4 }
create_boot_directory
build_boot_image "test-sel4"
append qemu_args " -nographic -m 64 "
run_genode_until forever


@ -94,7 +94,7 @@ namespace {
Native_capability Capability_space::create_ep_cap(Thread_base &ep_thread)
{
unsigned const ep_sel = ep_thread.tid().ep_sel;
Cap_sel const ep_sel = Cap_sel(ep_thread.tid().ep_sel);
Native_capability::Data &data =
local_capability_space().create_capability(ep_sel, Rpc_obj_key());
@ -147,7 +147,9 @@ unsigned Capability_space::alloc_rcv_sel()
void Capability_space::reset_sel(unsigned sel)
{
PDBG("not implemented");
int ret = seL4_CNode_Delete(INITIAL_SEL_CNODE, sel, CSPACE_SIZE_LOG2);
if (ret != 0)
PWRN("seL4_CNode_Delete returned %d", ret);
}


@ -25,6 +25,7 @@
/* base-internal includes */
#include <internal/capability_data.h>
namespace Genode { namespace Capability_space {
/**


@ -22,7 +22,23 @@
#include <internal/capability_space.h>
#include <internal/assert.h>
namespace Genode { template <unsigned, unsigned, typename> class Capability_space_sel4; }
namespace Genode {
template <unsigned, unsigned, typename> class Capability_space_sel4;
class Cap_sel
{
private:
addr_t _value;
public:
explicit Cap_sel(addr_t value) : _value(value) { }
addr_t value() const { return _value; }
};
}
/**
@ -36,7 +52,7 @@ namespace Genode { namespace Capability_space {
struct Ipc_cap_data
{
Rpc_obj_key rpc_obj_key;
unsigned sel;
Cap_sel sel;
Ipc_cap_data(Rpc_obj_key rpc_obj_key, unsigned sel)
: rpc_obj_key(rpc_obj_key), sel(sel) { }
@ -81,8 +97,8 @@ namespace Genode
};
enum {
CSPACE_SIZE_LOG2 = 12,
NUM_CORE_MANAGED_SEL_LOG2 = 10,
CSPACE_SIZE_LOG2 = 8,
NUM_CORE_MANAGED_SEL_LOG2 = 7,
};
};
@ -186,10 +202,12 @@ class Genode::Capability_space_sel4
* of the 'Native_capability::Data' type.
*/
template <typename... ARGS>
Native_capability::Data &create_capability(unsigned sel, ARGS... args)
Native_capability::Data &create_capability(Cap_sel cap_sel, ARGS... args)
{
Lock::Guard guard(_lock);
addr_t const sel = cap_sel.value();
ASSERT(!_caps_data[sel].rpc_obj_key().valid());
ASSERT(sel < NUM_CAPS);


@ -18,7 +18,7 @@
#include <base/thread.h>
/* seL4 includes */
#include <sel4/interfaces/sel4_client.h>
#include <sel4/sel4.h>
static inline void kernel_debugger_outstring(char const *msg)


@ -23,7 +23,7 @@
#include <internal/kernel_debugger.h>
/* seL4 includes */
#include <sel4/interfaces/sel4_client.h>
#include <sel4/sel4.h>
using namespace Genode;
@ -81,7 +81,7 @@ static seL4_MessageInfo_t new_seL4_message(Msgbuf_base &msg,
Capability_space::ipc_cap_data(cap);
seL4_SetMR(MR_IDX_CAPS + i, ipc_cap_data.rpc_obj_key.value());
seL4_SetCap(sel4_sel_cnt++, ipc_cap_data.sel);
seL4_SetCap(sel4_sel_cnt++, ipc_cap_data.sel.value());
} else {
seL4_SetMR(MR_IDX_CAPS + i, Rpc_obj_key::INVALID);
}
@ -333,7 +333,7 @@ void Ipc_client::_call()
seL4_MessageInfo_t const request_msg_info =
new_seL4_message(*_snd_msg, _write_offset);
unsigned const dst_sel = Capability_space::ipc_cap_data(_dst).sel;
unsigned const dst_sel = Capability_space::ipc_cap_data(_dst).sel.value();
seL4_MessageInfo_t const reply_msg_info =
seL4_Call(dst_sel, request_msg_info);
@ -374,7 +374,7 @@ void Ipc_server::_wait()
{
seL4_Word badge = Rpc_obj_key::INVALID;
seL4_MessageInfo_t const msg_info =
seL4_Wait(Thread_base::myself()->tid().ep_sel, &badge);
seL4_Recv(Thread_base::myself()->tid().ep_sel, &badge);
decode_seL4_message(badge, msg_info, *_rcv_msg);
@ -401,7 +401,7 @@ void Ipc_server::_reply_wait()
new_seL4_message(*_snd_msg, _write_offset);
seL4_MessageInfo_t const request_msg_info =
seL4_ReplyWait(Thread_base::myself()->tid().ep_sel,
seL4_ReplyRecv(Thread_base::myself()->tid().ep_sel,
reply_msg_info, &badge);
decode_seL4_message(badge, request_msg_info, *_rcv_msg);


@ -17,7 +17,7 @@
#include <cpu/memory_barrier.h>
/* seL4 includes */
#include <sel4/interfaces/sel4_client.h>
#include <sel4/sel4.h>
using namespace Genode;


@ -15,7 +15,7 @@
*/
/* seL4 includes */
#include <sel4/interfaces/sel4_client.h>
#include <sel4/sel4.h>
static inline void thread_yield() { seL4_Yield(); }


@ -83,7 +83,7 @@ Capability_space::create_rpc_obj_cap(Native_capability ep_cap,
Rpc_obj_key rpc_obj_key)
{
/* allocate core-local selector for RPC object */
unsigned const rpc_obj_sel = platform_specific()->alloc_core_sel();
Cap_sel const rpc_obj_sel = platform_specific()->core_sel_alloc().alloc();
/* create Genode capability */
Native_capability::Data &data =
@ -92,15 +92,15 @@ Capability_space::create_rpc_obj_cap(Native_capability ep_cap,
ASSERT(ep_cap.valid());
unsigned const ep_sel = local_capability_space().sel(*ep_cap.data());
Cap_sel const ep_sel(local_capability_space().sel(*ep_cap.data()));
/* mint endpoint capability into RPC object capability */
{
seL4_CNode const service = seL4_CapInitThreadCNode;
seL4_Word const dest_index = rpc_obj_sel;
seL4_Word const dest_index = rpc_obj_sel.value();
uint8_t const dest_depth = 32;
seL4_CNode const src_root = seL4_CapInitThreadCNode;
seL4_Word const src_index = ep_sel;
seL4_Word const src_index = ep_sel.value();
uint8_t const src_depth = 32;
seL4_CapRights const rights = seL4_AllRights;
seL4_CapData_t const badge = seL4_CapData_Badge_new(rpc_obj_key.value());
@ -126,7 +126,7 @@ Capability_space::create_rpc_obj_cap(Native_capability ep_cap,
Native_capability Capability_space::create_ep_cap(Thread_base &ep_thread)
{
unsigned const ep_sel = ep_thread.tid().ep_sel;
Cap_sel const ep_sel(ep_thread.tid().ep_sel);
/* entrypoint capabilities are not allocated from a CAP session */
Cap_session const *cap_session = nullptr;


@ -23,6 +23,7 @@
#include <platform.h>
#include <map_local.h>
#include <dataspace_component.h>
#include <untyped_memory.h>
using namespace Genode;
@ -63,14 +64,13 @@ class Context_area_rm_session : public Rm_session
size = round_page(size);
/* allocate physical memory */
Untyped_address const untyped_addr =
Untyped_memory::alloc(*platform_specific()->ram_alloc(), size);
Untyped_memory::convert_to_page_frames(untyped_addr.phys(),
size >> get_page_size_log2());
Range_allocator &phys_alloc = *platform_specific()->ram_alloc();
size_t const num_pages = size >> get_page_size_log2();
addr_t const phys = Untyped_memory::alloc_pages(phys_alloc, num_pages);
Untyped_memory::convert_to_page_frames(phys, num_pages);
Dataspace_component *ds = new (&_ds_slab)
Dataspace_component(size, 0, untyped_addr.phys(), CACHED, true, 0);
Dataspace_component(size, 0, phys, CACHED, true, 0);
if (!ds) {
PERR("dataspace for core context does not exist");
return (addr_t)0;


@ -0,0 +1,35 @@
/*
* \brief Interface for capability-selector allocator
* \author Norman Feske
* \date 2015-05-08
*/
/*
* Copyright (C) 2015 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
#ifndef _CORE__INCLUDE__CAP_SEL_ALLOC_H_
#define _CORE__INCLUDE__CAP_SEL_ALLOC_H_
/* Genode includes */
#include <base/exception.h>
/* base-internal includes */
#include <internal/capability_space_sel4.h>
namespace Genode { struct Cap_sel_alloc; }
struct Genode::Cap_sel_alloc
{
struct Alloc_failed : Exception { };
virtual Cap_sel alloc() = 0;
virtual void free(Cap_sel) = 0;
};
#endif /* _CORE__INCLUDE__CAP_SEL_ALLOC_H_ */


@ -33,71 +33,73 @@ class Genode::Cnode_base
{
private:
unsigned const _sel;
size_t const _size_log2;
Cap_sel const _sel;
size_t const _size_log2;
public:
unsigned sel() const { return _sel; }
size_t size_log2() const { return _size_log2; }
typedef Cnode_index Index;
Cap_sel sel() const { return _sel; }
size_t size_log2() const { return _size_log2; }
/**
* Copy selector from another CNode
*/
void copy(Cnode_base const &from, unsigned from_idx, unsigned to_idx)
void copy(Cnode_base const &from, Index from_idx, Index to_idx)
{
seL4_CNode const service = sel();
seL4_Word const dest_index = to_idx;
seL4_CNode const service = sel().value();
seL4_Word const dest_index = to_idx.value();
uint8_t const dest_depth = size_log2();
seL4_CNode const src_root = from.sel();
seL4_Word const src_index = from_idx;
seL4_CNode const src_root = from.sel().value();
seL4_Word const src_index = from_idx.value();
uint8_t const src_depth = from.size_log2();
seL4_CapRights const rights = seL4_AllRights;
int const ret = seL4_CNode_Copy(service, dest_index, dest_depth,
src_root, src_index, src_depth, rights);
if (ret != 0) {
PWRN("%s: seL4_CNode_Copy (0x%x) returned %d", __FUNCTION__,
from_idx, ret);
PWRN("%s: seL4_CNode_Copy (0x%lx) returned %d", __FUNCTION__,
from_idx.value(), ret);
}
}
void copy(Cnode_base const &from, unsigned idx) { copy(from, idx, idx); }
void copy(Cnode_base const &from, Index idx) { copy(from, idx, idx); }
/**
* Delete selector from CNode
*/
void remove(unsigned idx)
void remove(Index idx)
{
seL4_CNode_Delete(sel(), idx, size_log2());
seL4_CNode_Delete(sel().value(), idx.value(), size_log2());
}
/**
* Move selector from another CNode
*/
void move(Cnode_base const &from, unsigned from_idx, unsigned to_idx)
void move(Cnode_base const &from, Index from_idx, Index to_idx)
{
seL4_CNode const service = sel();
seL4_Word const dest_index = to_idx;
seL4_CNode const service = sel().value();
seL4_Word const dest_index = to_idx.value();
uint8_t const dest_depth = size_log2();
seL4_CNode const src_root = from.sel();
seL4_Word const src_index = from_idx;
seL4_CNode const src_root = from.sel().value();
seL4_Word const src_index = from_idx.value();
uint8_t const src_depth = from.size_log2();
int const ret = seL4_CNode_Move(service, dest_index, dest_depth,
src_root, src_index, src_depth);
if (ret != 0) {
PWRN("%s: seL4_CNode_Move (0x%x) returned %d", __FUNCTION__,
from_idx, ret);
PWRN("%s: seL4_CNode_Move (0x%lx) returned %d", __FUNCTION__,
from_idx.value(), ret);
}
}
void move(Cnode_base const &from, unsigned idx) { move(from, idx, idx); }
void move(Cnode_base const &from, Index idx) { move(from, idx, idx); }
/**
* Constructor
*/
Cnode_base(unsigned sel, size_t size_log2)
Cnode_base(Cap_sel sel, size_t size_log2)
: _sel(sel), _size_log2(size_log2) { }
};
@ -122,14 +124,37 @@ class Genode::Cnode : public Cnode_base, Noncopyable
*
* \throw Phys_alloc_failed
* \throw Untyped_address::Lookup_failed
*
* \deprecated
*/
Cnode(unsigned parent_sel, unsigned dst_idx, size_t size_log2,
Cnode(Cap_sel parent_sel, Index dst_idx, size_t size_log2,
Range_allocator &phys_alloc)
:
Cnode_base(dst_idx, size_log2)
{
Kernel_object::create<Kernel_object::Cnode>(phys_alloc, parent_sel,
dst_idx, size_log2);
create<Cnode_kobj>(phys_alloc, parent_sel, dst_idx, size_log2);
}
/**
* Constructor
*
* \param parent_sel CNode where to place the cap selector of the
* new CNode
* \param dst_idx designated index within 'parent_sel' referring to
* the created CNode
* \param size_log2 number of entries in CNode
* \param untyped_pool initial untyped memory pool used for allocating
* the CNode backing store
*
* \throw Retype_untyped_failed
* \throw Initial_untyped_pool::Initial_untyped_pool_exhausted
*/
Cnode(Cap_sel parent_sel, Index dst_idx, size_t size_log2,
Initial_untyped_pool &untyped_pool)
:
Cnode_base(dst_idx, size_log2)
{
create<Cnode_kobj>(untyped_pool, parent_sel, dst_idx, size_log2);
}
~Cnode()


@ -36,15 +36,18 @@ class Genode::Core_cspace
CORE_PAD_CNODE_SEL = 0xa01,
CORE_CNODE_SEL = 0xa02,
PHYS_CNODE_SEL = 0xa03,
CORE_VM_PAD_CNODE_SEL = 0xa04,
CORE_VM_CNODE_SEL = 0xa05,
UNTYPED_CNODE_SEL = 0xa04,
CORE_STATIC_SEL_END,
};
/* indices within top-level CNode */
enum Top_cnode_idx {
TOP_CNODE_CORE_IDX = 0,
TOP_CNODE_PHYS_IDX = 0xfff
TOP_CNODE_CORE_IDX = 0,
/* XXX mark last index usable for PDs */
TOP_CNODE_UNTYPED_IDX = 0xffe, /* untyped memory pages */
TOP_CNODE_PHYS_IDX = 0xfff /* physical page frames */
};
enum { CORE_VM_ID = 1 };


@ -0,0 +1,219 @@
/*
* \brief Initial pool of untyped memory
* \author Norman Feske
* \date 2016-02-11
*/
/*
* Copyright (C) 2016 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
#ifndef _CORE__INCLUDE__INITIAL_UNTYPED_POOL_H_
#define _CORE__INCLUDE__INITIAL_UNTYPED_POOL_H_
/* Genode includes */
#include <base/exception.h>
#include <base/printf.h>
/* core-local includes */
#include <sel4_boot_info.h>
/* seL4 includes */
#include <sel4/sel4.h>
namespace Genode { class Initial_untyped_pool; }
class Genode::Initial_untyped_pool
{
private:
/* base limit on sel4's autoconf.h */
enum { MAX_UNTYPED = (unsigned)CONFIG_MAX_NUM_BOOTINFO_UNTYPED_CAPS };
struct Free_offset { addr_t value = 0; };
Free_offset _free_offset[MAX_UNTYPED];
public:
class Initial_untyped_pool_exhausted : Exception { };
struct Range
{
/* core-local cap selector */
unsigned const sel;
/* index into 'untypedSizeBitsList' */
unsigned const index = sel - sel4_boot_info().untyped.start;
/* original size of untyped memory range */
size_t const size = 1UL << sel4_boot_info().untypedSizeBitsList[index];
/* physical address of the begin of the untyped memory range */
addr_t const phys = sel4_boot_info().untypedPaddrList[index];
/* offset to the unused part of the untyped memory range */
addr_t &free_offset;
Range(Initial_untyped_pool &pool, unsigned sel)
:
sel(sel), free_offset(pool._free_offset[index].value)
{ }
};
Initial_untyped_pool()
{
size_t total_bytes = 0;
PINF("initial untyped pool:");
for_each_range([&] (Range const &range) {
total_bytes += range.size;
PINF(" [%u] phys=0x%lx size=0x%zx",
range.sel, range.phys, range.size);
});
PINF(" total: %zd bytes", total_bytes);
}
/**
* Apply functor to each untyped memory range
*
* The functor is called with 'Range &' as argument.
*/
template <typename FUNC>
void for_each_range(FUNC const &func)
{
seL4_BootInfo const &bi = sel4_boot_info();
for (unsigned sel = bi.untyped.start; sel < bi.untyped.end; sel++) {
Range range(*this, sel);
func(range);
}
}
/**
* Return selector of untyped memory range where the allocation of
* the specified size is possible
*
* \param size_log2  kernel-object size, given as log2 of bytes
*
* This function models seL4's allocation policy of untyped memory. It
* is solely used at boot time to setup core's initial kernel objects
* from the initial pool of untyped memory ranges as reported by the
* kernel.
*
* \throw Initial_untyped_pool_exhausted
*/
unsigned alloc(size_t size_log2)
{
enum { UNKNOWN = 0 };
unsigned sel = UNKNOWN;
/*
* Go through the known initial untyped memory ranges to find
* a range that is able to host a kernel object of 'size'.
*/
for_each_range([&] (Range &range) {
if (sel != UNKNOWN)
return;
/*
* The seL4 kernel naturally aligns allocations within untyped
* memory ranges. So we have to apply the same policy to our
* shadow version of the kernel's 'FreeIndex'.
*/
addr_t const aligned_free_offset = align_addr(range.free_offset, size_log2);
/* calculate free index after allocation */
addr_t const new_free_offset = aligned_free_offset + (1 << size_log2);
/* check if allocation fits within current untyped memory range */
if (new_free_offset <= range.size) {
/*
* We found a matching range, consume 'size' and report the
* selector. The returned selector is used by the caller
* of 'alloc' to perform the actual kernel-object creation.
*/
range.free_offset = new_free_offset;
PDBG("alloc 0x%lx bytes from %u -> free index 0x%lx",
1UL << size_log2, range.sel, range.free_offset);
/* return selector is matching range */
sel = range.sel;
}
});
if (sel == UNKNOWN)
throw Initial_untyped_pool_exhausted();
return sel;
}
/**
* Convert remainder of the initial untyped memory into untyped pages
*/
void turn_remainder_into_untyped_pages()
{
for_each_range([&] (Range &range) {
/*
* The kernel limits the maximum number of kernel objects to
* be created via a single untyped-retype operation. So we
* need to iterate for each range, converting a limited batch
* of pages in each step.
*/
for (;;) {
addr_t const page_aligned_free_offset =
align_addr(range.free_offset, get_page_size_log2());
/* back out if no further page can be allocated */
if (page_aligned_free_offset + get_page_size() > range.size)
return;
size_t const remaining_size = range.size - page_aligned_free_offset;
size_t const retype_size_limit = get_page_size()*256;
size_t const batch_size = min(remaining_size, retype_size_limit);
/* mark consumed untyped memory range as allocated */
range.free_offset += batch_size;
addr_t const phys_addr = range.phys + page_aligned_free_offset;
size_t const num_pages = batch_size / get_page_size();
seL4_Untyped const service = range.sel;
int const type = seL4_UntypedObject;
int const size_bits = get_page_size_log2();
seL4_CNode const root = Core_cspace::TOP_CNODE_SEL;
int const node_index = Core_cspace::TOP_CNODE_UNTYPED_IDX;
int const node_depth = Core_cspace::NUM_TOP_SEL_LOG2;
int const node_offset = phys_addr >> get_page_size_log2();
int const num_objects = num_pages;
int const ret = seL4_Untyped_Retype(service,
type,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
if (ret != 0) {
PERR("%s: seL4_Untyped_Retype (untyped) returned %d",
__FUNCTION__, ret);
return;
}
}
});
}
};
#endif /* _CORE__INCLUDE__INITIAL_UNTYPED_POOL_H_ */


@ -14,50 +14,64 @@
#ifndef _CORE__INCLUDE__KERNEL_OBJECT_H_
#define _CORE__INCLUDE__KERNEL_OBJECT_H_
/* Genode includes */
#include <base/exception.h>
/* core includes */
#include <untyped_memory.h>
#include <initial_untyped_pool.h>
namespace Kernel_object {
namespace Genode {
using Genode::Untyped_address;
using Genode::Untyped_memory;
using Genode::Range_allocator;
/**
* Index referring to a slot in a CNode
*/
struct Cnode_index : Cap_sel
{
explicit Cnode_index(addr_t value) : Cap_sel(value) { }
struct Tcb
Cnode_index(Cap_sel sel) : Cap_sel(sel.value()) { }
};
struct Tcb_kobj
{
enum { SEL4_TYPE = seL4_TCBObject, SIZE_LOG2 = 12 };
static char const *name() { return "TCB"; }
};
struct Endpoint
struct Endpoint_kobj
{
enum { SEL4_TYPE = seL4_EndpointObject, SIZE_LOG2 = 4 };
static char const *name() { return "endpoint"; }
};
struct Cnode
struct Cnode_kobj
{
enum { SEL4_TYPE = seL4_CapTableObject, SIZE_LOG2 = 4 };
static char const *name() { return "cnode"; }
};
struct Page_table
struct Page_table_kobj
{
enum { SEL4_TYPE = seL4_IA32_PageTableObject, SIZE_LOG2 = 12 };
static char const *name() { return "page table"; }
};
struct Page_directory
struct Page_directory_kobj
{
enum { SEL4_TYPE = seL4_IA32_PageDirectoryObject, SIZE_LOG2 = 12 };
static char const *name() { return "page directory"; }
};
struct Retype_untyped_failed : Genode::Exception { };
/**
* Create kernel object from untyped memory
*
@ -69,47 +83,106 @@ namespace Kernel_object {
* \param size_log2 size of kernel object in bits, only needed for
* variable-sized objects like CNodes
*
* \return physical address of created kernel object
*
* \throw Phys_alloc_failed
* \throw Untyped_address::Lookup_failed
* \throw Retype_untyped_failed
*
* The kernel-object description is a policy type that contains enum
* definitions for 'SEL4_TYPE' and 'SIZE_LOG2', and a static function
* 'name' that returns the type name as a char const pointer.
*
* XXX to be removed
*/
template <typename KOBJ>
static addr_t create(Range_allocator &phys_alloc,
Cap_sel dst_cnode_sel,
Cnode_index dst_idx,
size_t size_log2 = 0)
{
/* allocate physical page */
addr_t phys_addr = Untyped_memory::alloc_page(phys_alloc);
seL4_Untyped const service = Untyped_memory::untyped_sel(phys_addr).value();
int const type = KOBJ::SEL4_TYPE;
int const size_bits = size_log2;
seL4_CNode const root = dst_cnode_sel.value();
int const node_index = 0;
int const node_depth = 0;
int const node_offset = dst_idx.value();
int const num_objects = 1;
int const ret = seL4_Untyped_Retype(service,
type,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
if (ret != 0) {
PERR("seL4_Untyped_RetypeAtOffset (%s) returned %d",
KOBJ::name(), ret);
throw Retype_untyped_failed();
}
PLOG("created kernel object '%s' at 0x%lx -> root=%lu index=%lu",
KOBJ::name(), phys_addr, dst_cnode_sel.value(), dst_idx.value());
return phys_addr;
}
/**
* Create kernel object from initial untyped memory pool
*
* \param KOBJ kernel-object description
* \param untyped_pool initial untyped memory pool
* \param dst_cnode_sel CNode selector where to store the cap pointing to
* the new kernel object
* \param dst_idx designated index of cap selector within 'dst_cnode'
* \param size_log2 size of kernel object in bits, only needed for
* variable-sized objects like CNodes
*
* \throw Initial_untyped_pool::Initial_untyped_pool_exhausted
* \throw Retype_untyped_failed
*
* The kernel-object description is a policy type that contains enum
* definitions for 'SEL4_TYPE' and 'SIZE_LOG2', and a static function
* 'name' that returns the type name as a char const pointer.
*/
template <typename KOBJ>
static Untyped_address create(Range_allocator &phys_alloc,
unsigned dst_cnode_sel,
unsigned dst_idx,
size_t size_log2 = 0)
static void create(Initial_untyped_pool &untyped_pool,
Cap_sel dst_cnode_sel,
Cnode_index dst_idx,
size_t size_log2 = 0)
{
Untyped_address const untyped_addr =
Untyped_memory::alloc_log2(phys_alloc, KOBJ::SIZE_LOG2 + size_log2);
unsigned const sel = untyped_pool.alloc(KOBJ::SIZE_LOG2 + size_log2);
seL4_Untyped const service = untyped_addr.sel();
seL4_Untyped const service = sel;
int const type = KOBJ::SEL4_TYPE;
int const offset = untyped_addr.offset();
int const size_bits = size_log2;
seL4_CNode const root = dst_cnode_sel;
seL4_CNode const root = dst_cnode_sel.value();
int const node_index = 0;
int const node_depth = 0;
int const node_offset = dst_idx;
int const node_offset = dst_idx.value();
int const num_objects = 1;
int const ret = seL4_Untyped_RetypeAtOffset(service,
type,
offset,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
int const ret = seL4_Untyped_Retype(service,
type,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
if (ret != 0)
PERR("seL4_Untyped_RetypeAtOffset (%s) returned %d",
if (ret != 0) {
PERR("seL4_Untyped_Retype (%s) returned %d",
KOBJ::name(), ret);
return untyped_addr;
throw Retype_untyped_failed();
}
}
};


@ -21,6 +21,7 @@
/* core includes */
#include <util.h>
#include <cap_sel_alloc.h>
namespace Genode { class Page_table_registry; }
@ -109,12 +110,17 @@ class Genode::Page_table_registry
}
};
class Slab_block : public Genode::Slab_block { long _data[4*1024]; };
struct Slab_block : Genode::Slab_block
{
long _data[4*1024];
Slab_block(Genode::Slab &slab) : Genode::Slab_block(&slab) { }
};
template <typename T>
struct Slab : Genode::Tslab<T, sizeof(Slab_block)>
{
Slab_block _initial_block;
Slab_block _initial_block { *this };
Slab(Allocator &md_alloc)
:
@ -122,9 +128,13 @@ class Genode::Page_table_registry
{ }
};
Allocator &_md_alloc;
Slab<Page_table> _page_table_slab;
Slab<Page_table::Entry> _page_table_entry_slab;
/* state variable to prevent nested attempts to extend the slab-pool */
bool _extending_page_table_entry_slab = false;
List<Page_table> _page_tables;
static addr_t _page_table_base(addr_t addr)
@ -153,6 +163,56 @@ class Genode::Page_table_registry
static constexpr bool verbose = false;
void _preserve_page_table_entry_slab_space()
{
/*
* Eagerly extend the pool of slab blocks if we run out of slab
* entries.
*
* At all times we have to ensure that the slab allocator has
* enough free entries to host the meta data needed for mapping a
* new slab block because such a new slab block will indeed result
* in the creation of further page-table entries. We have to
* preserve at least as many slab entries as the number of
* page-table entries used by a single slab block.
*
* In the computation of 'PRESERVED', the 1 accounts for the bits
* truncated by the division by page size. The 3 accounts for the
* slab's built-in threshold for extending the slab, which we need
* to avoid triggering (as this would result in just another
* nesting level).
*/
enum { PRESERVED = sizeof(Slab_block)/get_page_size() + 1 + 3 };
/* back out if there is still enough room */
if (_page_table_entry_slab.num_free_entries_higher_than(PRESERVED))
return;
/* back out on a nested call, while extending the slab */
if (_extending_page_table_entry_slab)
return;
_extending_page_table_entry_slab = true;
try {
/*
* Create new slab block. Note that we are entering a rat
* hole here as this operation will result in the nested
* call of 'map_local'.
*/
Slab_block *sb = new (_md_alloc) Slab_block(_page_table_entry_slab);
_page_table_entry_slab.insert_sb(sb);
} catch (Allocator::Out_of_memory) {
/* this should never happen */
PERR("Out of memory while allocating page-table meta data");
}
_extending_page_table_entry_slab = false;
}
public:
/**
@ -162,6 +222,7 @@ class Genode::Page_table_registry
*/
Page_table_registry(Allocator &md_alloc)
:
_md_alloc(md_alloc),
_page_table_slab(md_alloc),
_page_table_entry_slab(md_alloc)
{ }
@ -172,8 +233,10 @@ class Genode::Page_table_registry
* \param addr virtual address
* \param sel page-table selector
*/
void insert_page_table(addr_t addr, unsigned sel)
void insert_page_table(addr_t addr, Cap_sel sel)
{
/* XXX sel is unused */
if (_page_table_exists(addr)) {
PWRN("trying to insert page table for 0x%lx twice", addr);
return;
@ -197,6 +260,8 @@ class Genode::Page_table_registry
*/
void insert_page_table_entry(addr_t addr, unsigned sel)
{
_preserve_page_table_entry_slab_space();
_lookup(addr).insert_entry(_page_table_entry_slab, addr, sel);
}


@ -22,6 +22,7 @@
#include <core_mem_alloc.h>
#include <vm_space.h>
#include <core_cspace.h>
#include <initial_untyped_pool.h>
namespace Genode { class Platform; }
@ -35,6 +36,8 @@ class Genode::Platform : public Platform_generic
Phys_allocator _io_port_alloc; /* I/O port allocator */
Phys_allocator _irq_alloc; /* IRQ allocator */
Initial_untyped_pool _initial_untyped_pool;
/*
* Allocator for tracking unused physical addresses, which is used
* to allocate a range within the phys CNode for ROM modules.
@ -46,10 +49,6 @@ class Genode::Platform : public Platform_generic
Rom_fs _rom_fs; /* ROM file system */
/**
* Shortcut for physical memory allocator
*/
Range_allocator &_phys_alloc = *_core_mem_alloc.phys_alloc();
/**
* Virtual address range usable by non-core processes
@ -57,12 +56,6 @@ class Genode::Platform : public Platform_generic
addr_t _vm_base;
size_t _vm_size;
/**
* Initialize core allocators
*/
void _init_allocators();
bool const _init_allocators_done;
/*
* Until this point, no interaction with the seL4 kernel was needed.
However, the next steps involve the invocation of system calls and
@ -73,28 +66,62 @@ class Genode::Platform : public Platform_generic
bool const _init_sel4_ipc_buffer_done;
/* allocate 1st-level CNode */
Cnode _top_cnode { seL4_CapInitThreadCNode, Core_cspace::TOP_CNODE_SEL,
Core_cspace::NUM_TOP_SEL_LOG2, _phys_alloc };
Cnode _top_cnode { Cap_sel(seL4_CapInitThreadCNode),
Cnode_index(Core_cspace::TOP_CNODE_SEL),
Core_cspace::NUM_TOP_SEL_LOG2,
_initial_untyped_pool };
/* allocate 2nd-level CNode to align core's CNode with the LSB of the CSpace*/
Cnode _core_pad_cnode { seL4_CapInitThreadCNode, Core_cspace::CORE_PAD_CNODE_SEL,
Cnode _core_pad_cnode { Cap_sel(seL4_CapInitThreadCNode),
Cnode_index(Core_cspace::CORE_PAD_CNODE_SEL),
Core_cspace::NUM_CORE_PAD_SEL_LOG2,
_phys_alloc };
_initial_untyped_pool };
/* allocate 3rd-level CNode for core's objects */
Cnode _core_cnode { seL4_CapInitThreadCNode, Core_cspace::CORE_CNODE_SEL,
Core_cspace::NUM_CORE_SEL_LOG2, _phys_alloc };
Cnode _core_cnode { Cap_sel(seL4_CapInitThreadCNode),
Cnode_index(Core_cspace::CORE_CNODE_SEL),
Core_cspace::NUM_CORE_SEL_LOG2, _initial_untyped_pool };
/* allocate 2nd-level CNode for storing page-frame cap selectors */
Cnode _phys_cnode { seL4_CapInitThreadCNode, Core_cspace::PHYS_CNODE_SEL,
Core_cspace::NUM_PHYS_SEL_LOG2, _phys_alloc };
Cnode _phys_cnode { Cap_sel(seL4_CapInitThreadCNode),
Cnode_index(Core_cspace::PHYS_CNODE_SEL),
Core_cspace::NUM_PHYS_SEL_LOG2, _initial_untyped_pool };
struct Core_sel_alloc : Bit_allocator<1 << Core_cspace::NUM_PHYS_SEL_LOG2>
/* allocate 2nd-level CNode for storing cap selectors for untyped pages */
Cnode _untyped_cnode { Cap_sel(seL4_CapInitThreadCNode),
Cnode_index(Core_cspace::UNTYPED_CNODE_SEL),
Core_cspace::NUM_PHYS_SEL_LOG2, _initial_untyped_pool };
/*
* XXX Consider making Bit_allocator::_reserve public so that we can
* turn the bit allocator into a private member of 'Core_sel_alloc'.
*/
typedef Bit_allocator<1 << Core_cspace::NUM_PHYS_SEL_LOG2> Core_sel_bit_alloc;
struct Core_sel_alloc : Cap_sel_alloc, private Core_sel_bit_alloc
{
Core_sel_alloc() { _reserve(0, Core_cspace::CORE_STATIC_SEL_END); }
} _core_sel_alloc;
Lock _lock;
Lock _core_sel_alloc_lock;
Core_sel_alloc() { _reserve(0, Core_cspace::CORE_STATIC_SEL_END); }
Cap_sel alloc() override
{
Lock::Guard guard(_lock);
try {
return Cap_sel(Core_sel_bit_alloc::alloc()); }
catch (Bit_allocator::Out_of_indices) {
throw Alloc_failed(); }
}
void free(Cap_sel sel) override
{
Lock::Guard guard(_lock);
Core_sel_bit_alloc::free(sel.value());
}
} _core_sel_alloc;
/**
* Replace initial CSpace with custom CSpace layout
@ -112,6 +139,20 @@ class Genode::Platform : public Platform_generic
void _init_core_page_table_registry();
bool const _init_core_page_table_registry_done;
Cap_sel _init_asid_pool();
Cap_sel const _asid_pool_sel = _init_asid_pool();
/**
* Shortcut for physical memory allocator
*/
Range_allocator &_phys_alloc = *_core_mem_alloc.phys_alloc();
/**
* Initialize core allocators
*/
void _init_allocators();
bool const _init_allocators_done;
Vm_space _core_vm_space;
void _init_rom_modules();
@ -144,10 +185,12 @@ class Genode::Platform : public Platform_generic
Vm_space &core_vm_space() { return _core_vm_space; }
unsigned alloc_core_sel();
Cap_sel_alloc &core_sel_alloc() { return _core_sel_alloc; }
unsigned alloc_core_rcv_sel();
void reset_sel(unsigned sel);
void free_core_sel(unsigned sel);
Cap_sel asid_pool() const { return _asid_pool_sel; }
void wait_for_exit();
};


@ -36,17 +36,13 @@ class Genode::Platform_pd : public Address_space
Page_table_registry _page_table_registry;
unsigned const _vm_pad_cnode_sel;
unsigned const _vm_cnode_sel;
unsigned const _page_directory_sel;
Untyped_address _init_page_directory();
Untyped_address const _page_directory = _init_page_directory();
Cap_sel const _page_directory_sel;
addr_t _init_page_directory();
addr_t const _page_directory = _init_page_directory();
Vm_space _vm_space;
unsigned const _cspace_cnode_sel;
Cap_sel const _cspace_cnode_sel;
Cnode _cspace_cnode;
@ -108,13 +104,13 @@ class Genode::Platform_pd : public Address_space
** seL4-specific interface **
*****************************/
unsigned alloc_sel();
Cap_sel alloc_sel();
void free_sel(unsigned sel);
void free_sel(Cap_sel sel);
Cnode &cspace_cnode() { return _cspace_cnode; }
unsigned page_directory_sel() const { return _page_directory_sel; }
Cap_sel page_directory_sel() const { return _page_directory_sel; }
size_t cspace_size_log2() { return CSPACE_SIZE_LOG2; }


@ -51,15 +51,15 @@ class Genode::Platform_thread : public List<Platform_thread>::Element
Thread_info _info;
unsigned const _pager_obj_sel;
Cap_sel const _pager_obj_sel;
/*
* Selectors within the PD's CSpace
*
* Allocated when the thread is started.
*/
unsigned _fault_handler_sel = 0;
unsigned _ep_sel = 0;
Cap_sel _fault_handler_sel { 0 };
Cap_sel _ep_sel { 0 };
friend class Platform_pd;
@ -146,7 +146,7 @@ class Genode::Platform_thread : public List<Platform_thread>::Element
/**
* Return identification of thread when faulting
*/
unsigned long pager_object_badge() const { return _pager_obj_sel; }
unsigned long pager_object_badge() const { return _pager_obj_sel.value(); }
/**
* Set the executing CPU for this thread
@ -173,7 +173,7 @@ class Genode::Platform_thread : public List<Platform_thread>::Element
** seL4-specific interface **
*****************************/
unsigned tcb_sel() const { return _info.tcb_sel; }
Cap_sel tcb_sel() const { return _info.tcb_sel; }
void install_mapping(Mapping const &mapping);
};


@ -0,0 +1,105 @@
/*
* \brief Reentrant lock
* \author Norman Feske
* \date 2015-05-04
*
* Generally, well-designed software should not require a reentrant lock.
* However, the circular dependency between core's virtual address space and
* the backing store needed for managing the meta data of core's page tables
* and page table entries cannot easily be dissolved otherwise.
*/
/*
* Copyright (C) 2015 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
#ifndef _CORE__INCLUDE__REENTRANT_LOCK_H_
#define _CORE__INCLUDE__REENTRANT_LOCK_H_
/* Genode includes */
#include <base/lock.h>
namespace Genode { struct Reentrant_lock; }
struct Genode::Reentrant_lock
{
Lock _lock;
struct Guard : List<Guard>::Element, Noncopyable
{
Reentrant_lock &reentrant_lock;
Thread_base const * const myself = Thread_base::myself();
Guard(Reentrant_lock &reentrant_lock)
:
reentrant_lock(reentrant_lock)
{
/*
* Don't do anything if we are in a nested call
*/
if (reentrant_lock._myself_already_registered())
return;
/*
* We are the top-level caller. Register ourself at
* the '_callers' list so that nested calls will be
able to detect us (by calling '_top_level_caller').
*/
{
Lock::Guard guard(reentrant_lock._callers_lock);
reentrant_lock._callers.insert(this);
}
/*
* Since we are the initial caller, we can safely take
* the lock without risking a deadlock.
*/
reentrant_lock._lock.lock();
}
~Guard()
{
if (!reentrant_lock._registered(this))
return;
Lock::Guard guard(reentrant_lock._callers_lock);
reentrant_lock._callers.remove(this);
reentrant_lock._lock.unlock();
}
};
Lock _callers_lock;
List<Guard> _callers;
bool _myself_already_registered()
{
Lock::Guard guard(_callers_lock);
Thread_base const * const myself = Thread_base::myself();
for (Guard *c = _callers.first(); c; c = c->next())
if (c->myself == myself)
return true;
return false;
}
bool _registered(Guard const * const caller)
{
Lock::Guard guard(_callers_lock);
for (Guard *c = _callers.first(); c; c = c->next())
if (c == caller)
return true;
return false;
}
};
#endif /* _CORE__INCLUDE__REENTRANT_LOCK_H_ */


@ -0,0 +1,39 @@
/*
* \brief Access to seL4 boot info
* \author Norman Feske
* \date 2015-05-04
*/
/*
* Copyright (C) 2015 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
#ifndef _CORE__INCLUDE__SEL4_BOOT_INFO_H_
#define _CORE__INCLUDE__SEL4_BOOT_INFO_H_
/* Genode includes */
#include <base/stdint.h>
/* seL4 includes */
#include <sel4/bootinfo.h>
/* provided by the assembly startup code */
extern Genode::addr_t __initial_bx;
namespace Genode {
/**
* Obtain seL4 boot info structure
*/
static inline seL4_BootInfo const &sel4_boot_info()
{
return *(seL4_BootInfo const *)__initial_bx;
}
}
#endif /* _CORE__INCLUDE__SEL4_BOOT_INFO_H_ */


@ -26,17 +26,18 @@
/* core includes */
#include <map_local.h>
#include <kernel_object.h>
#include <untyped_memory.h>
namespace Genode {
struct Thread_info
{
unsigned tcb_sel = 0;
unsigned ep_sel = 0;
Cap_sel tcb_sel { 0 };
Cap_sel ep_sel { 0 };
addr_t ipc_buffer_phys = 0;
inline void write_thread_info_to_ipc_buffer(unsigned pd_ep_sel);
inline void write_thread_info_to_ipc_buffer(Cap_sel pd_ep_sel);
Thread_info() { }
@ -47,7 +48,7 @@ namespace Genode {
* Set register values for the instruction pointer and stack pointer and
* start the seL4 thread
*/
static inline void start_sel4_thread(unsigned tcb_sel, addr_t ip, addr_t sp);
static inline void start_sel4_thread(Cap_sel tcb_sel, addr_t ip, addr_t sp);
};
@ -57,42 +58,34 @@ void Genode::Thread_info::init(addr_t const utcb_virt_addr)
Range_allocator &phys_alloc = *platform.ram_alloc();
/* create IPC buffer of one page */
size_t const ipc_buffer_size_log2 = get_page_size_log2();
Untyped_address ipc_buffer =
Untyped_memory::alloc_log2(phys_alloc, ipc_buffer_size_log2);
ipc_buffer_phys = ipc_buffer.phys();
Untyped_memory::convert_to_page_frames(ipc_buffer.phys(), 1);
ipc_buffer_phys = Untyped_memory::alloc_page(phys_alloc);
Untyped_memory::convert_to_page_frames(ipc_buffer_phys, 1);
/* allocate TCB within core's CNode */
tcb_sel = platform.alloc_core_sel();
Kernel_object::create<Kernel_object::Tcb>(phys_alloc,
platform.core_cnode().sel(),
tcb_sel);
tcb_sel = platform.core_sel_alloc().alloc();
create<Tcb_kobj>(phys_alloc, platform.core_cnode().sel(), tcb_sel);
/* allocate synchronous endpoint within core's CNode */
ep_sel = platform.alloc_core_sel();
Kernel_object::create<Kernel_object::Endpoint>(phys_alloc,
platform.core_cnode().sel(),
ep_sel);
ep_sel = platform.core_sel_alloc().alloc();
create<Endpoint_kobj>(phys_alloc, platform.core_cnode().sel(), ep_sel);
/* assign IPC buffer to thread */
{
/* determine page frame selector of the allocated IPC buffer */
unsigned ipc_buffer_sel = Untyped_memory::frame_sel(ipc_buffer.phys());
Cap_sel ipc_buffer_sel = Untyped_memory::frame_sel(ipc_buffer_phys);
int const ret = seL4_TCB_SetIPCBuffer(tcb_sel, utcb_virt_addr,
ipc_buffer_sel);
int const ret = seL4_TCB_SetIPCBuffer(tcb_sel.value(), utcb_virt_addr,
ipc_buffer_sel.value());
ASSERT(ret == 0);
}
/* set scheduling priority */
enum { PRIORITY_MAX = 0xff };
seL4_TCB_SetPriority(tcb_sel, PRIORITY_MAX);
seL4_TCB_SetPriority(tcb_sel.value(), PRIORITY_MAX);
}
void Genode::start_sel4_thread(unsigned tcb_sel, addr_t ip, addr_t sp)
void Genode::start_sel4_thread(Cap_sel tcb_sel, addr_t ip, addr_t sp)
{
/* set register values for the instruction pointer and stack pointer */
seL4_UserContext regs;
@ -103,10 +96,10 @@ void Genode::start_sel4_thread(unsigned tcb_sel, addr_t ip, addr_t sp)
regs.esp = sp;
regs.gs = IPCBUF_GDT_SELECTOR;
int const ret = seL4_TCB_WriteRegisters(tcb_sel, false, 0, num_regs, &regs);
int const ret = seL4_TCB_WriteRegisters(tcb_sel.value(), false, 0, num_regs, &regs);
ASSERT(ret == 0);
seL4_TCB_Resume(tcb_sel);
seL4_TCB_Resume(tcb_sel.value());
}
#endif /* _CORE__INCLUDE__THREAD_SEL4_H_ */


@ -1,100 +0,0 @@
/*
* \brief Utilities for manipulating seL4 CNodes
* \author Norman Feske
* \date 2015-05-04
*/
/*
* Copyright (C) 2015 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
#ifndef _CORE__INCLUDE__UNTYPED_ADDRESS_H_
#define _CORE__INCLUDE__UNTYPED_ADDRESS_H_
/* seL4 includes */
#include <sel4/bootinfo.h>
namespace Genode { struct Untyped_address; }
/**
* Obtain seL4 boot info structure
*/
static inline seL4_BootInfo const &sel4_boot_info()
{
extern Genode::addr_t __initial_bx;
return *(seL4_BootInfo const *)__initial_bx;
}
/**
* Untuped memory address
*
* When referring to physical memory in seL4 system-call arguments, a memory
* address is specified as a tuple of an untyped memory range selector and the
* offset relative to the base address of the untyped memory range.
*/
class Genode::Untyped_address
{
private:
seL4_Untyped _sel = 0;
addr_t _offset = 0;
addr_t _phys = 0;
void _init(seL4_BootInfo const &bi, addr_t phys_addr, size_t size,
unsigned const start_idx, unsigned const num_idx)
{
for (unsigned i = start_idx; i < start_idx + num_idx; i++) {
/* index into 'untypedPaddrList' and 'untypedSizeBitsList' */
unsigned const k = i - bi.untyped.start;
addr_t const untyped_base = bi.untypedPaddrList[k];
size_t const untyped_size = 1UL << bi.untypedSizeBitsList[k];
if (phys_addr >= untyped_base
&& phys_addr + size - 1 <= untyped_base + untyped_size - 1) {
_sel = i;
_offset = phys_addr - untyped_base;
return;
}
}
}
public:
class Lookup_failed { };
/**
* Construct untyped address for given physical address range
*
* \throw Lookup_failed
*/
Untyped_address(addr_t phys_addr, size_t size)
{
_phys = phys_addr;
seL4_BootInfo const &bi = sel4_boot_info();
_init(bi, phys_addr, size, bi.untyped.start,
bi.untyped.end - bi.untyped.start);
/* XXX handle untyped device addresses */
if (_sel == 0) {
PERR("could not find untyped address for 0x%lx", phys_addr);
throw Lookup_failed();
}
}
unsigned sel() const { return _sel; }
addr_t offset() const { return _offset; }
addr_t phys() const { return _phys; }
};
#endif /* _CORE__INCLUDE__UNTYPED_ADDRESS_H_ */


@ -19,10 +19,10 @@
/* core includes */
#include <util.h>
#include <untyped_address.h>
#include <cap_sel_alloc.h>
/* seL4 includes */
#include <sel4/interfaces/sel4_client.h>
#include <sel4/sel4.h>
namespace Genode { struct Untyped_memory; }
@ -31,51 +31,56 @@ struct Genode::Untyped_memory
class Phys_alloc_failed : Exception { };
/**
* Allocate naturally-aligned physical memory for seL4 kernel object
*
* \throw Phys_alloc_failed
* \throw Untyped_address::Lookup_failed
*/
static inline Untyped_address alloc_log2(Range_allocator &phys_alloc,
size_t const size_log2)
static inline addr_t alloc_pages(Range_allocator &phys_alloc, size_t num_pages)
{
/*
* The natural alignment is needed to ensure that the backing store is
* contained in a single untyped memory region.
*/
void *out_ptr = nullptr;
size_t const size = 1UL << size_log2;
Range_allocator::Alloc_return alloc_ret =
phys_alloc.alloc_aligned(size, &out_ptr, size_log2);
addr_t const phys_addr = (addr_t)out_ptr;
phys_alloc.alloc_aligned(num_pages*get_page_size(), &out_ptr,
get_page_size_log2());
if (alloc_ret.is_error()) {
PERR("%s: allocation of untyped memory failed", __FUNCTION__);
throw Phys_alloc_failed();
}
return Untyped_address(phys_addr, size);
return (addr_t)out_ptr;
}
static inline addr_t alloc_page(Range_allocator &phys_alloc)
{
return alloc_pages(phys_alloc, 1);
}
/**
* Allocate natually aligned physical memory
*
* \param size size in bytes
* Local utility solely used by 'untyped_sel' and 'frame_sel'
*/
static inline Untyped_address alloc(Range_allocator &phys_alloc,
size_t const size)
static inline Cap_sel _core_local_sel(Core_cspace::Top_cnode_idx top_idx,
addr_t phys_addr)
{
if (size == 0) {
PERR("%s: invalid size of 0x%zd", __FUNCTION__, size);
throw Phys_alloc_failed();
}
unsigned const upper_bits = top_idx << Core_cspace::NUM_PHYS_SEL_LOG2;
unsigned const lower_bits = phys_addr >> get_page_size_log2();
/* calculate next power-of-two size that fits the allocation size */
size_t const size_log2 = log2(size - 1) + 1;
return Cap_sel(upper_bits | lower_bits);
}
return alloc_log2(phys_alloc, size_log2);
/**
* Return core-local selector for untyped page at given physical address
*/
static inline Cap_sel untyped_sel(addr_t phys_addr)
{
return _core_local_sel(Core_cspace::TOP_CNODE_UNTYPED_IDX, phys_addr);
}
/**
* Return core-local selector for 4K page frame at given physical address
*/
static inline Cap_sel frame_sel(addr_t phys_addr)
{
return _core_local_sel(Core_cspace::TOP_CNODE_PHYS_IDX, phys_addr);
}
@ -85,42 +90,33 @@ struct Genode::Untyped_memory
static inline void convert_to_page_frames(addr_t phys_addr,
size_t num_pages)
{
size_t const size = num_pages << get_page_size_log2();
for (size_t i = 0; i < num_pages; i++, phys_addr += get_page_size()) {
/* align allocation offset to page boundary */
Untyped_address const untyped_address(align_addr(phys_addr, 12), size);
seL4_Untyped const service = untyped_sel(phys_addr).value();
int const type = seL4_IA32_4K;
int const size_bits = 0;
seL4_CNode const root = Core_cspace::TOP_CNODE_SEL;
int const node_index = Core_cspace::TOP_CNODE_PHYS_IDX;
int const node_depth = Core_cspace::NUM_TOP_SEL_LOG2;
int const node_offset = phys_addr >> get_page_size_log2();
int const num_objects = 1;
seL4_Untyped const service = untyped_address.sel();
int const type = seL4_IA32_4K;
int const offset = untyped_address.offset();
int const size_bits = 0;
seL4_CNode const root = Core_cspace::TOP_CNODE_SEL;
int const node_index = Core_cspace::TOP_CNODE_PHYS_IDX;
int const node_depth = Core_cspace::NUM_TOP_SEL_LOG2;
int const node_offset = phys_addr >> get_page_size_log2();
int const num_objects = num_pages;
int const ret = seL4_Untyped_Retype(service,
type,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
int const ret = seL4_Untyped_RetypeAtOffset(service,
type,
offset,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
if (ret != 0) {
PERR("%s: seL4_Untyped_RetypeAtOffset (IA32_4K) returned %d",
__FUNCTION__, ret);
if (ret != 0) {
PERR("%s: seL4_Untyped_RetypeAtOffset (IA32_4K) returned %d",
__FUNCTION__, ret);
return;
}
}
}
static inline unsigned frame_sel(addr_t phys_addr)
{
return (Core_cspace::TOP_CNODE_PHYS_IDX << Core_cspace::NUM_PHYS_SEL_LOG2)
| (phys_addr >> get_page_size_log2());
}
};
#endif /* _CORE__INCLUDE__UNTYPED_MEMORY_H_ */


@ -16,10 +16,15 @@
/* Genode includes */
#include <util/bit_allocator.h>
#include <util/volatile_object.h>
#include <base/thread.h>
/* core includes */
#include <page_table_registry.h>
#include <cnode.h>
#include <cap_sel_alloc.h>
#include <core_cspace.h>
#include <reentrant_lock.h>
namespace Genode { class Vm_space; }
@ -28,17 +33,44 @@ class Genode::Vm_space
{
private:
Cap_sel_alloc &_cap_sel_alloc;
Page_table_registry &_page_table_registry;
unsigned const _id;
unsigned const _pd_sel;
Cap_sel const _pd_sel;
Range_allocator &_phys_alloc;
/**
* Maximum number of page tables and page frames for the VM space
*/
enum { NUM_VM_SEL_LOG2 = 13 };
enum {
/**
* Number of entries of 3rd-level VM CNode ('_vm_3rd_cnode')
*/
VM_3RD_CNODE_SIZE_LOG2 = 8,
/**
* Number of entries of each leaf CNodes
*/
LEAF_CNODE_SIZE_LOG2 = 8UL,
LEAF_CNODE_SIZE = 1UL << LEAF_CNODE_SIZE_LOG2,
/**
* Number of leaf CNodes
*/
NUM_LEAF_CNODES_LOG2 = 4UL,
NUM_LEAF_CNODES = 1UL << NUM_LEAF_CNODES_LOG2,
/**
* Maximum number of page tables and page frames for the VM space
*/
NUM_VM_SEL_LOG2 = LEAF_CNODE_SIZE_LOG2 + NUM_LEAF_CNODES_LOG2,
/**
* Number of entries of the VM padding CNode ('_vm_pad_cnode')
*/
VM_PAD_CNODE_SIZE_LOG2 = 32 - Core_cspace::NUM_TOP_SEL_LOG2
- VM_3RD_CNODE_SIZE_LOG2 - LEAF_CNODE_SIZE_LOG2,
};
Cnode &_top_level_cnode;
Cnode &_phys_cnode;
@ -49,19 +81,86 @@ class Genode::Vm_space
Cnode _vm_pad_cnode;
/**
* 3rd-level CNode for storing page-table and page-frame capabilities
* 3rd-level CNode that hosts the leaf CNodes
*/
Cnode _vm_cnode;
Cnode _vm_3rd_cnode;
/***
* 4th-level CNode for storing page-table and page-frame capabilities
*/
class Leaf_cnode
{
private:
/*
* We leave the CNode unconstructed at the time of its
* instantiation to be able to create an array of 'Leaf_cnode'
* objects (where we cannot pass any arguments to the
* constructors of the individual objects).
*/
Lazy_volatile_object<Cnode> _cnode;
public:
void construct(Cap_sel_alloc &cap_sel_alloc,
Cap_sel core_cnode_sel,
Range_allocator &phys_alloc)
{
_cnode.construct(core_cnode_sel, cap_sel_alloc.alloc(),
LEAF_CNODE_SIZE_LOG2, phys_alloc);
}
void destruct(Cap_sel_alloc &cap_sel_alloc)
{
cap_sel_alloc.free(_cnode->sel());
}
Cnode &cnode() { return *_cnode; }
};
Leaf_cnode _vm_cnodes[NUM_LEAF_CNODES];
/**
* Allocator for the selectors within '_vm_cnode'
* Allocator for the selectors within '_vm_cnodes'
*/
Bit_allocator<1UL << NUM_VM_SEL_LOG2> _sel_alloc;
Lock _lock;
/**
* Return leaf CNode that contains an index allocated from '_sel_alloc'
*/
Cnode &_leaf_cnode(unsigned idx)
{
return _vm_cnodes[idx >> LEAF_CNODE_SIZE_LOG2].cnode();
}
/**
* Return selector for a capability slot within '_vm_cnode'
* Return entry within leaf cnode
*/
Cnode_index _leaf_cnode_entry(unsigned idx) const
{
return Cnode_index(idx & (LEAF_CNODE_SIZE - 1));
}
/*
* We have to use a reentrant lock here to account for the fact that
* 'map_local' may be called recursively in some circumstances.
*
* For example, to zero-out physical memory pages, the '_clear_ds'
* method needs to locally map the pages within core's local address
* space by calling 'map_local'. The 'map_local' function populates
* core's VM space and performs the required book keeping via the
* page-table registry embedded within core's 'Vm_space' object.
* Internally, the page-table registry maintains metadata via slab
* allocators. In the corner case where a new slab block needs to be
* allocated as a side effect of adding metadata for a new page or page
* table, the slab allocator needs to make another physical memory
* range visible within core, which eventually leads to a nested call
* of 'map_local'.
*/
Reentrant_lock _lock;
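
Note: the recursion described above is the textbook case for a lock that the
holding thread may re-acquire. As an analogy in standard C++ (not the actual
'Reentrant_lock' implementation), with a hypothetical 'map_local' that nests
once per slab-block allocation:

#include <mutex>
#include <cstdio>

static std::recursive_mutex lock;

static void map_local(int depth)
{
    /* re-acquiring the lock from the same thread does not deadlock */
    std::lock_guard<std::recursive_mutex> guard(lock);
    if (depth == 0)
        return;
    /* e.g., allocating a fresh slab block triggers a nested mapping */
    map_local(depth - 1);
}

int main() { map_local(2); printf("nested locking succeeded\n"); return 0; }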
/**
* Return selector for a capability slot within '_vm_cnodes'
*/
unsigned _idx_to_sel(unsigned idx) const { return (_id << 20) | idx; }
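
Note: a small self-contained check of how such a selector decomposes across
the four CNode levels (12 + 4 + 8 + 8 bits of CNode indices). The value of
NUM_TOP_SEL_LOG2 is a placeholder for the constant defined in 'core_cspace.h':

#include <cassert>

enum {
    NUM_TOP_SEL_LOG2       = 12, /* placeholder */
    VM_3RD_CNODE_SIZE_LOG2 = 8,
    LEAF_CNODE_SIZE_LOG2   = 8,
    VM_PAD_CNODE_SIZE_LOG2 = 32 - NUM_TOP_SEL_LOG2
                           - VM_3RD_CNODE_SIZE_LOG2 - LEAF_CNODE_SIZE_LOG2
};

/* mirrors '_idx_to_sel': the VM-space id occupies the upper 12 selector bits */
static unsigned idx_to_sel(unsigned id, unsigned idx) { return (id << 20) | idx; }

int main()
{
    unsigned const id  = 7;     /* index of this VM space in the top-level CNode */
    unsigned const idx = 0x5a7; /* index allocated from '_sel_alloc' */
    unsigned const sel = idx_to_sel(id, idx);

    assert(VM_PAD_CNODE_SIZE_LOG2 == 4);
    assert( (sel >> 20)         == id);                            /* top-level CNode */
    assert(((sel >> 16) & 0xf)  == 0);                             /* VM pad CNode    */
    assert(((sel >>  8) & 0xff) == (idx >> LEAF_CNODE_SIZE_LOG2)); /* 3rd-level CNode */
    assert( (sel & 0xff)        == (idx & ((1u << LEAF_CNODE_SIZE_LOG2) - 1))); /* leaf entry */
    return 0;
}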
@ -76,7 +175,9 @@ class Genode::Vm_space
* This is needed because each page-frame selector can be
* inserted into only a single page table.
*/
_vm_cnode.copy(_phys_cnode, from_phys >> get_page_size_log2(), pte_idx);
_leaf_cnode(pte_idx).copy(_phys_cnode,
Cnode_index(from_phys >> get_page_size_log2()),
Cnode_index(_leaf_cnode_entry(pte_idx)));
/* remember relationship between pte_sel and the virtual address */
_page_table_registry.insert_page_table_entry(to_virt, pte_idx);
@ -86,7 +187,7 @@ class Genode::Vm_space
*/
{
seL4_IA32_Page const service = _idx_to_sel(pte_idx);
seL4_IA32_PageDirectory const pd = _pd_sel;
seL4_IA32_PageDirectory const pd = _pd_sel.value();
seL4_Word const vaddr = to_virt;
seL4_CapRights const rights = seL4_AllRights;
seL4_IA32_VMAttributes const attr = seL4_IA32_Default_VMAttributes;
@ -103,22 +204,27 @@ class Genode::Vm_space
{
/* delete copy of the mapping's page-frame selector */
_page_table_registry.apply(virt, [&] (unsigned idx) {
_vm_cnode.remove(idx);
_leaf_cnode(idx).remove(_leaf_cnode_entry(idx));
_sel_alloc.free(idx);
});
/* release meta data about the mapping */
_page_table_registry.forget_page_table_entry(virt);
}
void _map_page_table(unsigned pt_sel, addr_t to_virt)
void _map_page_table(Cap_sel pt_sel, addr_t to_virt)
{
seL4_IA32_PageTable const service = pt_sel;
seL4_IA32_PageDirectory const pd = _pd_sel;
seL4_IA32_PageTable const service = pt_sel.value();
seL4_IA32_PageDirectory const pd = _pd_sel.value();
seL4_Word const vaddr = to_virt;
seL4_IA32_VMAttributes const attr = seL4_IA32_Default_VMAttributes;
PDBG("map page table 0x%lx to virt 0x%lx, pdir sel %lu",
pt_sel.value(), to_virt, _pd_sel.value());
int const ret = seL4_IA32_PageTable_Map(service, pd, vaddr, attr);
if (ret != 0)
PDBG("seL4_IA32_PageTable_Map returned %d", ret);
}
@ -138,12 +244,13 @@ class Genode::Vm_space
/* XXX account the consumed backing store */
try {
Kernel_object::create<Kernel_object::Page_table>(_phys_alloc,
_vm_cnode.sel(),
pt_idx);
create<Page_table_kobj>(_phys_alloc,
_leaf_cnode(pt_idx).sel(),
_leaf_cnode_entry(pt_idx));
} catch (...) { throw Alloc_page_table_failed(); }
unsigned const pt_sel = _idx_to_sel(pt_idx);
Cap_sel const pt_sel(_idx_to_sel(pt_idx));
_page_table_registry.insert_page_table(to_virt, pt_sel);
@ -156,17 +263,16 @@ class Genode::Vm_space
* Constructor
*
* \param pd_sel selector for page directory
* \param vm_pad_cnode_sel selector for the (2nd-level) VM pad CNode
* \param vm_cnode_sel selector for the (3rd-level) VM CNode
* \param cap_sel_alloc capability-selector allocator used for
* VM CNodes
* \param phys_alloc backing store for the CNodes
* \param top_level_cnode top-level CNode to insert 'vm_pad_cnode_sel'
* \param id ID used as index in 'top_level_cnode'
* \param page_table_registry association of VM CNode selectors with
* virtual addresses
*/
Vm_space(unsigned pd_sel,
unsigned vm_pad_cnode_sel,
unsigned vm_cnode_sel,
Vm_space(Cap_sel pd_sel,
Cap_sel_alloc &cap_sel_alloc,
Range_allocator &phys_alloc,
Cnode &top_level_cnode,
Cnode &core_cnode,
@ -174,27 +280,49 @@ class Genode::Vm_space
unsigned id,
Page_table_registry &page_table_registry)
:
_cap_sel_alloc(cap_sel_alloc),
_page_table_registry(page_table_registry),
_id(id), _pd_sel(pd_sel),
_phys_alloc(phys_alloc),
_top_level_cnode(top_level_cnode),
_phys_cnode(phys_cnode),
_vm_pad_cnode(core_cnode.sel(), vm_pad_cnode_sel,
32 - 12 - NUM_VM_SEL_LOG2, phys_alloc),
_vm_cnode(core_cnode.sel(), vm_cnode_sel, NUM_VM_SEL_LOG2, phys_alloc)
_vm_pad_cnode(core_cnode.sel(),
Cnode_index(_cap_sel_alloc.alloc().value()),
VM_PAD_CNODE_SIZE_LOG2, phys_alloc),
_vm_3rd_cnode(core_cnode.sel(),
Cnode_index(_cap_sel_alloc.alloc().value()),
VM_3RD_CNODE_SIZE_LOG2, phys_alloc)
{
Cnode_base const cspace(seL4_CapInitThreadCNode, 32);
Cnode_base const cspace(Cap_sel(seL4_CapInitThreadCNode), 32);
for (unsigned i = 0; i < NUM_LEAF_CNODES; i++) {
/* init leaf VM CNode */
_vm_cnodes[i].construct(_cap_sel_alloc, core_cnode.sel(), phys_alloc);
/* insert leaf VM CNode into 3rd-level VM CNode */
_vm_3rd_cnode.copy(cspace, _vm_cnodes[i].cnode().sel(), Cnode_index(i));
}
/* insert 3rd-level VM CNode into 2nd-level VM-pad CNode */
_vm_pad_cnode.copy(cspace, vm_cnode_sel, 0);
_vm_pad_cnode.copy(cspace, _vm_3rd_cnode.sel(), Cnode_index(0));
/* insert 2nd-level VM-pad CNode into 1st-level CNode */
_top_level_cnode.copy(cspace, vm_pad_cnode_sel, id);
_top_level_cnode.copy(cspace, _vm_pad_cnode.sel(), Cnode_index(id));
}
~Vm_space()
{
_cap_sel_alloc.free(_vm_pad_cnode.sel());
_cap_sel_alloc.free(_vm_3rd_cnode.sel());
for (unsigned i = 0; i < NUM_LEAF_CNODES; i++)
_vm_cnodes[i].destruct(_cap_sel_alloc);
}
void map(addr_t from_phys, addr_t to_virt, size_t num_pages)
{
Lock::Guard guard(_lock);
Reentrant_lock::Guard guard(_lock);
/* check if we need to add a page table to core's VM space */
if (!_page_table_registry.has_page_table_at(to_virt))
@ -208,7 +336,7 @@ class Genode::Vm_space
void unmap(addr_t virt, size_t num_pages)
{
Lock::Guard guard(_lock);
Reentrant_lock::Guard guard(_lock);
for (size_t i = 0; i < num_pages; i++) {
off_t const offset = i << get_page_size_log2();


@ -23,7 +23,7 @@
#include <internal/capability_space_sel4.h>
/* seL4 includes */
#include <sel4/interfaces/sel4_client.h>
#include <sel4/sel4.h>
#include <sel4/arch/pfIPC.h>
using namespace Genode;
@ -74,11 +74,11 @@ void Ipc_pager::reply_and_wait_for_fault()
seL4_MessageInfo_t const reply_msg = seL4_MessageInfo_new(0, 0, 0, 0);
page_fault_msg_info =
seL4_ReplyWait(Thread_base::myself()->tid().ep_sel, reply_msg, &badge);
seL4_ReplyRecv(Thread_base::myself()->tid().ep_sel, reply_msg, &badge);
} else {
page_fault_msg_info =
seL4_Wait(Thread_base::myself()->tid().ep_sel, &badge);
seL4_Recv(Thread_base::myself()->tid().ep_sel, &badge);
}
Fault_info const fault_info(page_fault_msg_info);


@ -21,6 +21,7 @@
#include <platform.h>
#include <map_local.h>
#include <cnode.h>
#include <untyped_memory.h>
using namespace Genode;
@ -83,29 +84,6 @@ void Platform::_init_unused_phys_alloc()
}
/**
* Initialize allocator with untyped memory ranges according to the boot info
*/
static void init_allocator(Range_allocator &alloc,
Range_allocator &unused_phys_alloc,
seL4_BootInfo const &bi,
unsigned const start_idx, unsigned const num_idx)
{
for (unsigned i = start_idx; i < start_idx + num_idx; i++) {
/* index into 'untypedPaddrList' and 'untypedSizeBitsList' */
unsigned const k = i - bi.untyped.start;
addr_t const base = bi.untypedPaddrList[k];
size_t const size = 1UL << bi.untypedSizeBitsList[k];
alloc.add_range(base, size);
unused_phys_alloc.remove_range(base, size);
}
}
static inline void init_sel4_ipc_buffer()
{
asm volatile ("movl %0, %%gs" :: "r"(IPCBUF_GDT_SELECTOR) : "memory");
@ -114,20 +92,44 @@ static inline void init_sel4_ipc_buffer()
void Platform::_init_allocators()
{
seL4_BootInfo const &bi = sel4_boot_info();
/* interrupt allocator */
_irq_alloc.add_range(0, 255);
/* physical memory ranges */
init_allocator(*_core_mem_alloc.phys_alloc(), _unused_phys_alloc,
bi, bi.untyped.start,
bi.untyped.end - bi.untyped.start);
/*
* XXX allocate intermediate CNodes for organizing the untyped pages here
*/
/* register remaining untyped memory to physical memory allocator */
auto add_phys_range = [&] (Initial_untyped_pool::Range const &range) {
addr_t const page_aligned_offset =
align_addr(range.free_offset, get_page_size_log2());
if (page_aligned_offset >= range.size)
return;
addr_t const base = range.phys + page_aligned_offset;
size_t const size = range.size - page_aligned_offset;
PDBG("register phys mem range 0x%lx size=0x%zx", base, size);
_core_mem_alloc.phys_alloc()->add_range(base, size);
_unused_phys_alloc.remove_range(base, size);
};
_initial_untyped_pool.for_each_range(add_phys_range);
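
Note: the arithmetic of 'add_phys_range' with made-up numbers, using a local
'align_up' helper in place of the 'align_addr' call above:

#include <cassert>
#include <cstdint>

/* round up to a 2^log2 boundary */
static uint64_t align_up(uint64_t addr, unsigned log2)
{
    return (addr + (1ULL << log2) - 1) & ~((1ULL << log2) - 1);
}

int main()
{
    /* hypothetical range: 1 MiB at 0x10000000, of which 0x1234 bytes are consumed */
    uint64_t const phys = 0x10000000, size = 0x100000, free_offset = 0x1234;

    uint64_t const page_aligned_offset = align_up(free_offset, 12);
    assert(page_aligned_offset == 0x2000);

    /* only the untouched, page-aligned tail goes to the phys allocator */
    assert(phys + page_aligned_offset == 0x10002000);
    assert(size - page_aligned_offset == 0xfe000);
    return 0;
}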
/* turn remaining untyped memory ranges into untyped pages */
_initial_untyped_pool.turn_remainder_into_untyped_pages();
/*
* From this point on, we can no longer create kernel objects from the
* '_initial_untyped_pool' because the pool is empty.
*/
/* I/O memory ranges */
init_allocator(_io_mem_alloc, _unused_phys_alloc,
bi, bi.deviceUntyped.start,
bi.deviceUntyped.end - bi.deviceUntyped.start);
// init_allocator(_io_mem_alloc, _unused_phys_alloc,
// bi, bi.deviceUntyped.start,
// bi.deviceUntyped.end - bi.deviceUntyped.start);
/* core's virtual memory */
_core_mem_alloc.virt_alloc()->add_range(_vm_base, _vm_size);
@ -161,54 +163,68 @@ void Platform::_init_allocators()
void Platform::_switch_to_core_cspace()
{
Cnode_base const initial_cspace(seL4_CapInitThreadCNode, 32);
Cnode_base const initial_cspace(Cap_sel(seL4_CapInitThreadCNode), 32);
/* copy initial selectors to core's CNode */
_core_cnode.copy(initial_cspace, seL4_CapInitThreadTCB);
_core_cnode.copy(initial_cspace, seL4_CapInitThreadPD);
_core_cnode.move(initial_cspace, seL4_CapIRQControl); /* cannot be copied */
_core_cnode.copy(initial_cspace, seL4_CapIOPort);
_core_cnode.copy(initial_cspace, seL4_CapBootInfoFrame);
_core_cnode.copy(initial_cspace, seL4_CapArchBootInfoFrame);
_core_cnode.copy(initial_cspace, seL4_CapInitThreadIPCBuffer);
_core_cnode.copy(initial_cspace, seL4_CapIPI);
_core_cnode.copy(initial_cspace, seL4_CapDomain);
_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapInitThreadTCB));
_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapInitThreadVSpace));
_core_cnode.move(initial_cspace, Cnode_index(seL4_CapIRQControl)); /* cannot be copied */
_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapASIDControl));
_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapInitThreadASIDPool));
_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapIOPort));
_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapBootInfoFrame));
_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapInitThreadIPCBuffer));
_core_cnode.copy(initial_cspace, Cnode_index(seL4_CapDomain));
/* replace seL4_CapInitThreadCNode with new top-level CNode */
_core_cnode.copy(initial_cspace, Core_cspace::TOP_CNODE_SEL, seL4_CapInitThreadCNode);
_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::TOP_CNODE_SEL),
Cnode_index(seL4_CapInitThreadCNode));
/* copy untyped memory selectors to core's CNode */
seL4_BootInfo const &bi = sel4_boot_info();
/*
* We have to move (not copy) the selectors for the initial untyped ranges
* because some of them are already populated with kernel objects allocated
* via '_initial_untyped_pool'. For such an untyped memory range, the
* attempt to copy its selector would result in the following error:
*
* <<seL4: Error deriving cap for CNode Copy operation.>>
*/
for (unsigned sel = bi.untyped.start; sel < bi.untyped.end; sel++)
_core_cnode.copy(initial_cspace, sel);
_core_cnode.move(initial_cspace, Cnode_index(sel));
for (unsigned sel = bi.deviceUntyped.start; sel < bi.deviceUntyped.end; sel++)
_core_cnode.copy(initial_cspace, sel);
// for (unsigned sel = bi.deviceUntyped.start; sel < bi.deviceUntyped.end; sel++)
// _core_cnode.copy(initial_cspace, sel);
for (unsigned sel = bi.userImageFrames.start; sel < bi.userImageFrames.end; sel++)
_core_cnode.copy(initial_cspace, sel);
_core_cnode.copy(initial_cspace, Cnode_index(sel));
/* copy statically created CNode selectors to core's CNode */
_core_cnode.copy(initial_cspace, Core_cspace::TOP_CNODE_SEL);
_core_cnode.copy(initial_cspace, Core_cspace::CORE_PAD_CNODE_SEL);
_core_cnode.copy(initial_cspace, Core_cspace::CORE_CNODE_SEL);
_core_cnode.copy(initial_cspace, Core_cspace::PHYS_CNODE_SEL);
_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::TOP_CNODE_SEL));
_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::CORE_PAD_CNODE_SEL));
_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::CORE_CNODE_SEL));
_core_cnode.copy(initial_cspace, Cnode_index(Core_cspace::PHYS_CNODE_SEL));
/*
* Construct CNode hierarchy of core's CSpace
*/
/* insert 3rd-level core CNode into 2nd-level core-pad CNode */
_core_pad_cnode.copy(initial_cspace, Core_cspace::CORE_CNODE_SEL, 0);
_core_pad_cnode.copy(initial_cspace, Cnode_index(Core_cspace::CORE_CNODE_SEL),
Cnode_index(0));
/* insert 2nd-level core-pad CNode into 1st-level CNode */
_top_cnode.copy(initial_cspace, Core_cspace::CORE_PAD_CNODE_SEL,
Core_cspace::TOP_CNODE_CORE_IDX);
_top_cnode.copy(initial_cspace, Cnode_index(Core_cspace::CORE_PAD_CNODE_SEL),
Cnode_index(Core_cspace::TOP_CNODE_CORE_IDX));
/* insert 2nd-level phys-mem CNode into 1st-level CNode */
_top_cnode.copy(initial_cspace, Core_cspace::PHYS_CNODE_SEL,
Core_cspace::TOP_CNODE_PHYS_IDX);
_top_cnode.copy(initial_cspace, Cnode_index(Core_cspace::PHYS_CNODE_SEL),
Cnode_index(Core_cspace::TOP_CNODE_PHYS_IDX));
/* insert 2nd-level untyped-pages CNode into 1st-level CNode */
_top_cnode.copy(initial_cspace, Cnode_index(Core_cspace::UNTYPED_CNODE_SEL),
Cnode_index(Core_cspace::TOP_CNODE_UNTYPED_IDX));
/* activate core's CSpace */
{
@ -226,6 +242,12 @@ void Platform::_switch_to_core_cspace()
}
Cap_sel Platform::_init_asid_pool()
{
return Cap_sel(seL4_CapInitThreadASIDPool);
}
void Platform::_init_core_page_table_registry()
{
seL4_BootInfo const &bi = sel4_boot_info();
@ -236,7 +258,7 @@ void Platform::_init_core_page_table_registry()
addr_t virt_addr = (addr_t)(&_prog_img_beg);
for (unsigned sel = bi.userImagePTs.start; sel < bi.userImagePTs.end; sel++) {
_core_page_table_registry.insert_page_table(virt_addr, sel);
_core_page_table_registry.insert_page_table(virt_addr, Cap_sel(sel));
/* one page table has 1024 entries */
virt_addr += 1024*get_page_size();
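
Note: the stride of the loop follows from the IA-32 page-table geometry
(1024 entries of 4 KiB each); a trivial check of the arithmetic:

#include <cassert>

int main()
{
    enum { PAGE_SIZE = 4096, PT_ENTRIES = 1024 };
    /* one page table spans 4 MiB of virtual memory */
    assert(PT_ENTRIES * PAGE_SIZE == 4*1024*1024);
    return 0;
}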
@ -313,9 +335,10 @@ void Platform::_init_rom_modules()
/*
* Install the module's frame selectors into phys CNode
*/
Cnode_base const initial_cspace(seL4_CapInitThreadCNode, 32);
Cnode_base const initial_cspace(Cap_sel(seL4_CapInitThreadCNode), 32);
for (unsigned i = 0; i < module_num_frames; i++)
_phys_cnode.copy(initial_cspace, module_frame_sel + i, dst_frame + i);
_phys_cnode.copy(initial_cspace, Cnode_index(module_frame_sel + i),
Cnode_index(dst_frame + i));
PLOG("boot module '%s' (%zd bytes)", header->name, header->size);
@ -341,14 +364,13 @@ Platform::Platform()
_init_unused_phys_alloc_done((_init_unused_phys_alloc(), true)),
_vm_base(0x2000), /* 2nd page is used as IPC buffer of main thread */
_vm_size(2*1024*1024*1024UL - _vm_base), /* use the lower 2GiB */
_init_allocators_done((_init_allocators(), true)),
_init_sel4_ipc_buffer_done((init_sel4_ipc_buffer(), true)),
_switch_to_core_cspace_done((_switch_to_core_cspace(), true)),
_core_page_table_registry(*core_mem_alloc()),
_init_core_page_table_registry_done((_init_core_page_table_registry(), true)),
_core_vm_space(seL4_CapInitThreadPD,
Core_cspace::CORE_VM_PAD_CNODE_SEL,
Core_cspace::CORE_VM_CNODE_SEL,
_init_allocators_done((_init_allocators(), true)),
_core_vm_space(Cap_sel(seL4_CapInitThreadPD),
_core_sel_alloc,
_phys_alloc,
_top_cnode,
_core_cnode,
@ -372,35 +394,20 @@ Platform::Platform()
}
unsigned Platform::alloc_core_sel()
{
Lock::Guard guard(_core_sel_alloc_lock);
return _core_sel_alloc.alloc();
}
unsigned Platform::alloc_core_rcv_sel()
{
unsigned rcv_sel = alloc_core_sel();
Cap_sel rcv_sel = _core_sel_alloc.alloc();
seL4_SetCapReceivePath(_core_cnode.sel(), rcv_sel, _core_cnode.size_log2());
seL4_SetCapReceivePath(_core_cnode.sel().value(), rcv_sel.value(),
_core_cnode.size_log2());
return rcv_sel;
return rcv_sel.value();
}
void Platform::reset_sel(unsigned sel)
{
_core_cnode.remove(sel);
}
void Platform::free_core_sel(unsigned sel)
{
Lock::Guard guard(_core_sel_alloc_lock);
_core_sel_alloc.free(sel);
_core_cnode.remove(Cap_sel(sel));
}


@ -96,34 +96,43 @@ int Platform_pd::assign_parent(Native_capability parent)
* INITIAL_SEL_PARENT within the PD's CSpace.
*/
_cspace_cnode.copy(platform_specific()->core_cnode(),
ipc_cap_data.sel,
INITIAL_SEL_PARENT);
Cnode_index(ipc_cap_data.sel),
Cnode_index(INITIAL_SEL_PARENT));
return 0;
}
Untyped_address Platform_pd::_init_page_directory()
addr_t Platform_pd::_init_page_directory()
{
using namespace Kernel_object;
return create<Page_directory>(*platform()->ram_alloc(),
platform_specific()->core_cnode().sel(),
_page_directory_sel);
PDBG("_init_page_directory at sel %lu", _page_directory_sel.value());
addr_t const phys =
create<Page_directory_kobj>(*platform()->ram_alloc(),
platform_specific()->core_cnode().sel(),
_page_directory_sel);
int const ret = seL4_IA32_ASIDPool_Assign(platform_specific()->asid_pool().value(),
_page_directory_sel.value());
if (ret != 0)
PERR("seL4_IA32_ASIDPool_Assign returned %d", ret);
return phys;
}
unsigned Platform_pd::alloc_sel()
Cap_sel Platform_pd::alloc_sel()
{
Lock::Guard guard(_sel_alloc_lock);
return _sel_alloc.alloc();
return Cap_sel(_sel_alloc.alloc());
}
void Platform_pd::free_sel(unsigned sel)
void Platform_pd::free_sel(Cap_sel sel)
{
Lock::Guard guard(_sel_alloc_lock);
_sel_alloc.free(sel);
_sel_alloc.free(sel.value());
}
@ -144,19 +153,17 @@ Platform_pd::Platform_pd(Allocator * md_alloc, char const *,
:
_id(pd_id_alloc().alloc()),
_page_table_registry(*md_alloc),
_vm_pad_cnode_sel(platform_specific()->alloc_core_sel()),
_vm_cnode_sel(platform_specific()->alloc_core_sel()),
_page_directory_sel(platform_specific()->alloc_core_sel()),
_page_directory_sel(platform_specific()->core_sel_alloc().alloc()),
_page_directory(_init_page_directory()),
_vm_space(_page_directory_sel,
_vm_pad_cnode_sel, _vm_cnode_sel,
platform_specific()->core_sel_alloc(),
*platform()->ram_alloc(),
platform_specific()->top_cnode(),
platform_specific()->core_cnode(),
platform_specific()->phys_cnode(),
_id,
_page_table_registry),
_cspace_cnode_sel(platform_specific()->alloc_core_sel()),
_cspace_cnode_sel(platform_specific()->core_sel_alloc().alloc()),
_cspace_cnode(platform_specific()->core_cnode().sel(), _cspace_cnode_sel,
CSPACE_SIZE_LOG2,
*platform()->ram_alloc())
@ -164,7 +171,7 @@ Platform_pd::Platform_pd(Allocator * md_alloc, char const *,
/* install CSpace selector at predefined position in the PD's CSpace */
_cspace_cnode.copy(platform_specific()->core_cnode(),
_cspace_cnode_sel,
INITIAL_SEL_CNODE);
Cnode_index(INITIAL_SEL_CNODE));
}
@ -172,8 +179,4 @@ Platform_pd::~Platform_pd()
{
/* invalidate weak pointers to this object */
Address_space::lock_for_destruction();
platform_specific()->free_core_sel(_vm_cnode_sel);
platform_specific()->free_core_sel(_vm_pad_cnode_sel);
platform_specific()->free_core_sel(_cspace_cnode_sel);
}


@ -79,7 +79,7 @@ void Genode::install_mapping(Mapping const &mapping, unsigned long pager_object_
** Utilities to support the Platform_thread interface **
********************************************************/
static void prepopulate_ipc_buffer(addr_t ipc_buffer_phys, unsigned ep_sel)
static void prepopulate_ipc_buffer(addr_t ipc_buffer_phys, Cap_sel ep_sel)
{
/* IPC buffer is one page */
size_t const page_rounded_size = get_page_size();
@ -97,7 +97,7 @@ static void prepopulate_ipc_buffer(addr_t ipc_buffer_phys, unsigned ep_sel)
/* populate IPC buffer with thread information */
Native_utcb &utcb = *(Native_utcb *)virt_addr;
utcb.ep_sel = ep_sel;
utcb.ep_sel = ep_sel.value();
/* unmap IPC buffer from core */
unmap_local((addr_t)virt_addr, 1);
@ -120,7 +120,7 @@ int Platform_thread::start(void *ip, void *sp, unsigned int cpu_no)
_fault_handler_sel = _pd->alloc_sel();
/* pager endpoint in core */
unsigned const pager_sel = Capability_space::ipc_cap_data(_pager->cap()).sel;
Cap_sel const pager_sel(Capability_space::ipc_cap_data(_pager->cap()).sel);
/* install page-fault handler endpoint selector to the PD's CSpace */
_pd->cspace_cnode().copy(platform_specific()->core_cnode(), pager_sel,
@ -146,9 +146,9 @@ int Platform_thread::start(void *ip, void *sp, unsigned int cpu_no)
seL4_CapData_t const no_cap_data = { { 0 } };
int const ret = seL4_TCB_SetSpace(_info.tcb_sel, _fault_handler_sel,
_pd->cspace_cnode().sel(), guard_cap_data,
_pd->page_directory_sel(), no_cap_data);
int const ret = seL4_TCB_SetSpace(_info.tcb_sel.value(), _fault_handler_sel.value(),
_pd->cspace_cnode().sel().value(), guard_cap_data,
_pd->page_directory_sel().value(), no_cap_data);
ASSERT(ret == 0);
start_sel4_thread(_info.tcb_sel, (addr_t)ip, (addr_t)(sp));
@ -206,7 +206,7 @@ Platform_thread::Platform_thread(size_t, const char *name, unsigned priority,
:
_name(name),
_utcb(utcb),
_pager_obj_sel(platform_specific()->alloc_core_sel())
_pager_obj_sel(platform_specific()->core_sel_alloc().alloc())
{
_info.init(_utcb ? _utcb : INITIAL_IPC_BUFFER_VIRT);
@ -219,7 +219,7 @@ Platform_thread::~Platform_thread()
PDBG("not completely implemented");
platform_thread_registry().remove(*this);
platform_specific()->free_core_sel(_pager_obj_sel);
platform_specific()->core_sel_alloc().free(_pager_obj_sel);
}


@ -18,6 +18,7 @@
#include <ram_session_component.h>
#include <platform.h>
#include <map_local.h>
#include <untyped_memory.h>
using namespace Genode;


@ -41,14 +41,14 @@ void Thread_base::_init_platform_thread(size_t, Type type)
thread_info.ipc_buffer_phys, utcb_virt_addr);
}
_tid.tcb_sel = thread_info.tcb_sel;
_tid.ep_sel = thread_info.ep_sel;
_tid.tcb_sel = thread_info.tcb_sel.value();
_tid.ep_sel = thread_info.ep_sel.value();
Platform &platform = *platform_specific();
seL4_CapData_t no_cap_data = { { 0 } };
int const ret = seL4_TCB_SetSpace(_tid.tcb_sel, 0,
platform.top_cnode().sel(), no_cap_data,
platform.top_cnode().sel().value(), no_cap_data,
seL4_CapInitThreadPD, no_cap_data);
ASSERT(ret == 0);
}
@ -71,7 +71,8 @@ void Thread_base::_thread_start()
void Thread_base::start()
{
start_sel4_thread(_tid.tcb_sel, (addr_t)&_thread_start, (addr_t)stack_top());
start_sel4_thread(Cap_sel(_tid.tcb_sel), (addr_t)&_thread_start,
(addr_t)stack_top());
}


@ -1,13 +0,0 @@
diff --git a/libsel4/include/sel4/macros.h b/libsel4/include/sel4/macros.h
index 99b18e1..0eea3fb 100644
--- a/libsel4/include/sel4/macros.h
+++ b/libsel4/include/sel4/macros.h
@@ -17,7 +17,7 @@
* the same size as an 'int'.
*/
#define SEL4_FORCE_LONG_ENUM(type) \
- _enum_pad_ ## type = (1U << ((sizeof(int)*8) - 1))
+ _enum_pad_ ## type = ((1U << ((sizeof(int)*8) - 1)) - 1)
#ifndef CONST
#define CONST __attribute__((__const__))


@ -1,23 +0,0 @@
/*
* \brief Dummy context_area helper
* \author Norman Feske
* \date 2014-10-14
*/
/*
* Copyright (C) 2014 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
#include <ram_session/ram_session.h>
#include <rm_session/rm_session.h>
namespace Genode {
Rm_session *env_context_area_rm_session() { return nullptr; }
Ram_session *env_context_area_ram_session() { return nullptr; }
}


@ -1,462 +0,0 @@
/*
* \brief Test for exercising the seL4 kernel interface
* \author Norman Feske
* \date 2014-10-14
*/
/*
* Copyright (C) 2014 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
/* Genode includes */
#include <base/printf.h>
#include <base/stdint.h>
#include <util/string.h>
/* seL4 includes */
#include <sel4/bootinfo.h>
#include <sel4/interfaces/sel4_client.h>
static seL4_BootInfo const *boot_info()
{
extern Genode::addr_t __initial_bx;
return (seL4_BootInfo const *)__initial_bx;
}
static void dump_untyped_ranges(seL4_BootInfo const &bi,
unsigned start, unsigned size)
{
for (unsigned i = start; i < start + size; i++) {
/* index into 'untypedPaddrList' */
unsigned const k = i - bi.untyped.start;
PINF(" [%02x] [%08lx,%08lx]",
i,
(long)bi.untypedPaddrList[k],
(long)bi.untypedPaddrList[k] + (1UL << bi.untypedSizeBitsList[k]) - 1);
}
}
static void dump_boot_info(seL4_BootInfo const &bi)
{
PINF("--- boot info at %p ---", &bi);
PINF("ipcBuffer: %p", bi.ipcBuffer);
PINF("initThreadCNodeSizeBits: %d", (int)bi.initThreadCNodeSizeBits);
PINF("untyped: [%x,%x)", bi.untyped.start, bi.untyped.end);
dump_untyped_ranges(bi, bi.untyped.start,
bi.untyped.end - bi.untyped.start);
PINF("deviceUntyped: [%x,%x)",
bi.deviceUntyped.start, bi.deviceUntyped.end);
dump_untyped_ranges(bi, bi.deviceUntyped.start,
bi.deviceUntyped.end - bi.deviceUntyped.start);
}
static inline void init_ipc_buffer()
{
asm volatile ("movl %0, %%gs" :: "r"(IPCBUF_GDT_SELECTOR) : "memory");
}
enum { SEL4_TCB_SIZE = 0x1000,
SEL4_EP_SIZE = 16,
SEL4_PAGE_TABLE_SIZE = 1UL << 12,
SEL4_PAGE_SIZE = 1UL << 12 };
/*
* Capability for the second thread's TCB
*/
enum { SECOND_THREAD_CAP = 0x100 };
/*
* Capability for IPC entrypoint, set up by the main thread, used by the second
* thread.
*/
enum { EP_CAP = 0x101 };
/*
* Capability slot used by the second thread to receive a capability via IPC.
*/
enum { RECV_CAP = 0x102 };
/*
* Minted endpoint capability, derived from EP_CAP
*/
enum { EP_MINTED_CAP = 0x103 };
/*
* Capabilities for a custom created page table, a page, and a second cap
* for the same page.
*/
enum { PAGE_TABLE_CAP = 0x104, PAGE_CAP = 0x105, PAGE_CAP_2 = 0x106 };
void second_thread_entry()
{
init_ipc_buffer();
for (;;) {
seL4_SetCapReceivePath(seL4_CapInitThreadCNode, RECV_CAP, 32);
seL4_CNode_Delete(seL4_CapInitThreadCNode, RECV_CAP, 32);
PDBG("call seL4_Wait");
seL4_MessageInfo_t msg_info = seL4_Wait(EP_CAP, nullptr);
PDBG("returned from seL4_Wait, call seL4_Reply");
PDBG("msg_info: got unwrapped %d", seL4_MessageInfo_get_capsUnwrapped(msg_info));
PDBG(" got extra caps %d", seL4_MessageInfo_get_extraCaps(msg_info));
PDBG(" label %d", seL4_MessageInfo_get_label(msg_info));
if (seL4_MessageInfo_get_capsUnwrapped(msg_info)) {
PDBG(" badge %d", seL4_CapData_Badge_get_Badge(seL4_GetBadge(0)));
}
seL4_Reply(msg_info);
PDBG("returned from seL4_Reply");
}
}
/**
* Return cap selector of largest available untyped memory range
*/
static seL4_Untyped const largest_untyped_range(seL4_BootInfo const &bi)
{
using Genode::size_t;
seL4_Untyped largest = 0;
size_t largest_size = 0;
unsigned const idx_start = bi.untyped.start;
unsigned const idx_size = bi.untyped.end - idx_start;
for (unsigned i = idx_start; i < idx_start + idx_size; i++) {
size_t const size = (1UL << bi.untypedSizeBitsList[i - idx_start]) - 1;
if (size <= largest_size)
continue;
largest_size = size;
largest = i;
}
return largest;
}
extern char _bss_start, _bss_end;
int main()
{
seL4_BootInfo const *bi = boot_info();
dump_boot_info(*bi);
PDBG("set_ipc_buffer");
init_ipc_buffer();
PDBG("seL4_SetUserData");
seL4_SetUserData((seL4_Word)bi->ipcBuffer);
/* use largest untyped memory region for allocating kernel objects */
seL4_Untyped const untyped = largest_untyped_range(*bi);
/* offset to next free position within the untyped memory range */
unsigned long untyped_offset = 0;
/* create second thread */
{
seL4_Untyped const service = untyped;
int const type = seL4_TCBObject;
int const offset = untyped_offset;
int const size_bits = 0;
seL4_CNode const root = seL4_CapInitThreadCNode;
int const node_index = 0;
int const node_depth = 0;
int const node_offset = SECOND_THREAD_CAP;
int const num_objects = 1;
untyped_offset += SEL4_TCB_SIZE;
int const ret = seL4_Untyped_RetypeAtOffset(service,
type,
offset,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
PDBG("seL4_Untyped_RetypeAtOffset (TCB) returned %d", ret);
}
/* create synchronous IPC entrypoint */
{
seL4_Untyped const service = untyped;
int const type = seL4_EndpointObject;
int const offset = untyped_offset;
int const size_bits = 0;
seL4_CNode const root = seL4_CapInitThreadCNode;
int const node_index = 0;
int const node_depth = 0;
int const node_offset = EP_CAP;
int const num_objects = 1;
untyped_offset += SEL4_EP_SIZE;
int const ret = seL4_Untyped_RetypeAtOffset(service,
type,
offset,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
PDBG("seL4_Untyped_RetypeAtOffset (EP) returned %d", ret);
}
/* assign IPC buffer to second thread */
{
static_assert(sizeof(seL4_IPCBuffer) % 512 == 0,
"unexpected seL4_IPCBuffer size");
int const ret = seL4_TCB_SetIPCBuffer(SECOND_THREAD_CAP,
(seL4_Word)(bi->ipcBuffer + 1),
seL4_CapInitThreadIPCBuffer);
PDBG("seL4_TCB_SetIPCBuffer returned %d", ret);
}
/* start second thread */
long stack[0x1000];
{
seL4_UserContext regs;
Genode::memset(&regs, 0, sizeof(regs));
regs.eip = (uint32_t)&second_thread_entry;
regs.esp = (uint32_t)&stack[0] + sizeof(stack);
int const ret = seL4_TCB_WriteRegisters(SECOND_THREAD_CAP, false,
0, 2, &regs);
PDBG("seL4_TCB_WriteRegisters returned %d", ret);
}
{
seL4_CapData_t no_cap_data = { { 0 } };
int const ret = seL4_TCB_SetSpace(SECOND_THREAD_CAP, 0,
seL4_CapInitThreadCNode, no_cap_data,
seL4_CapInitThreadPD, no_cap_data);
PDBG("seL4_TCB_SetSpace returned %d", ret);
}
seL4_TCB_Resume(SECOND_THREAD_CAP);
seL4_TCB_SetPriority(SECOND_THREAD_CAP, 0xff);
PDBG("seL4_Call, delegating a TCB capability");
{
seL4_MessageInfo_t msg_info = seL4_MessageInfo_new(13, 0, 1, 0);
seL4_SetCap(0, SECOND_THREAD_CAP);
seL4_Call(EP_CAP, msg_info);
PDBG("returned from seL4_Call");
}
PDBG("seL4_Call, delegating a TCB capability");
{
seL4_MessageInfo_t msg_info = seL4_MessageInfo_new(13, 0, 1, 0);
seL4_SetCap(0, SECOND_THREAD_CAP);
seL4_Call(EP_CAP, msg_info);
PDBG("returned from seL4_Call");
}
PDBG("seL4_Call, delegating a minted endpoint capability");
{
/* mint EP_CAP into EP_MINTED_CAP */
seL4_CNode const service = seL4_CapInitThreadCNode;
seL4_Word const dest_index = EP_MINTED_CAP;
uint8_t const dest_depth = 32;
seL4_CNode const src_root = seL4_CapInitThreadCNode;
seL4_Word const src_index = EP_CAP;
uint8_t const src_depth = 32;
seL4_CapRights const rights = seL4_Transfer_Mint;
seL4_CapData_t const badge = seL4_CapData_Badge_new(111);
int const ret = seL4_CNode_Mint(service,
dest_index,
dest_depth,
src_root,
src_index,
src_depth,
rights,
badge);
PDBG("seL4_CNode_Mint (EP_MINTED_CAP) returned %d", ret);
seL4_MessageInfo_t msg_info = seL4_MessageInfo_new(13, 0, 3, 0);
/*
* Supply 3 capabilities as arguments. The first and third will be
* unwrapped at the receiver, the second will be delegated.
*/
seL4_SetCap(0, EP_MINTED_CAP);
seL4_SetCap(1, SECOND_THREAD_CAP);
seL4_SetCap(2, EP_MINTED_CAP);
seL4_Call(EP_CAP, msg_info);
PDBG("returned from seL4_Call");
}
/*
* Test the mapping of memory
*/
/* create page table */
{
/* align allocation offset */
untyped_offset = Genode::align_addr(untyped_offset, 12);
seL4_Untyped const service = untyped;
int const type = seL4_IA32_PageTableObject;
int const offset = untyped_offset;
int const size_bits = 0;
seL4_CNode const root = seL4_CapInitThreadCNode;
int const node_index = 0;
int const node_depth = 0;
int const node_offset = PAGE_TABLE_CAP;
int const num_objects = 1;
untyped_offset += SEL4_PAGE_TABLE_SIZE;
int const ret = seL4_Untyped_RetypeAtOffset(service,
type,
offset,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
PDBG("seL4_Untyped_RetypeAtOffset (PAGE_TABLE) returned %d", ret);
}
/* create 4K page */
{
/* align allocation offset */
untyped_offset = Genode::align_addr(untyped_offset, 12);
seL4_Untyped const service = untyped;
int const type = seL4_IA32_4K;
int const offset = untyped_offset;
int const size_bits = 0;
seL4_CNode const root = seL4_CapInitThreadCNode;
int const node_index = 0;
int const node_depth = 0;
int const node_offset = PAGE_CAP;
int const num_objects = 1;
untyped_offset += SEL4_PAGE_SIZE;
int const ret = seL4_Untyped_RetypeAtOffset(service,
type,
offset,
size_bits,
root,
node_index,
node_depth,
node_offset,
num_objects);
PDBG("seL4_Untyped_RetypeAtOffset (PAGE) returned %d", ret);
}
/* add page table into our page directory at address 0x40000000 */
{
seL4_IA32_PageTable const service = PAGE_TABLE_CAP;
seL4_IA32_PageDirectory const pd = seL4_CapInitThreadPD;
seL4_Word const vaddr = 0x40000000;
seL4_IA32_VMAttributes const attr = seL4_IA32_Default_VMAttributes;
int const ret = seL4_IA32_PageTable_Map(service, pd, vaddr, attr);
PDBG("seL4_IA32_PageTable_Map returned %d", ret);
}
/* add page to page table at 0x40001000 */
{
seL4_IA32_Page const service = PAGE_CAP;
seL4_IA32_PageDirectory const pd = seL4_CapInitThreadPD;
seL4_Word const vaddr = 0x40001000;
seL4_CapRights const rights = seL4_AllRights;
seL4_IA32_VMAttributes const attr = seL4_IA32_Default_VMAttributes;
int const ret = seL4_IA32_Page_Map(service, pd, vaddr, rights, attr);
PDBG("seL4_IA32_Page_Map to 0x%lx returned %d", (unsigned long)vaddr, ret);
}
/*
* We cannot use the same PAGE_CAP for the second mapping (see Chapter
* 6.4 of the seL4 manual). So we need to create and use a copy of the
* page cap.
*/
{
seL4_CNode const service = seL4_CapInitThreadCNode;
seL4_Word const dest_index = PAGE_CAP_2;
uint8_t const dest_depth = 32;
seL4_CNode const src_root = seL4_CapInitThreadCNode;
seL4_Word const src_index = PAGE_CAP;
uint8_t const src_depth = 32;
seL4_CapRights const rights = seL4_AllRights;
int const ret = seL4_CNode_Copy(service, dest_index, dest_depth,
src_root, src_index, src_depth, rights);
PDBG("seL4_CNode_Copy returned %d", ret);
}
/* add the same page to page table at 0x40002000 */
{
seL4_IA32_Page const service = PAGE_CAP_2;
seL4_IA32_PageDirectory const pd = seL4_CapInitThreadPD;
seL4_Word const vaddr = 0x40002000;
seL4_CapRights const rights = seL4_AllRights;
seL4_IA32_VMAttributes const attr = seL4_IA32_Default_VMAttributes;
int const ret = seL4_IA32_Page_Map(service, pd, vaddr, rights, attr);
PDBG("seL4_IA32_Page_Map to 0x%lx returned %d", (unsigned long)vaddr, ret);
}
/* write data to the first mapping of the page */
Genode::strncpy((char *)0x40001000, "Data written to 0x40001000", 100);
/* print content of the second mapping */
PLOG("read from 0x40002000: \"%s\"", (char const *)0x40002000);
*(int *)0x1122 = 0;
return 0;
}


@ -1,41 +0,0 @@
/*
* \brief Minimalistic implementation of the Genode::env
* \author Norman Feske
* \date 2014-10-14
*/
/*
* Copyright (C) 2014 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
#include <base/env.h>
namespace Genode { struct Mini_env; }
struct Genode::Mini_env : Env
{
Parent *parent() override { return nullptr; }
Ram_session *ram_session() override { return nullptr; }
Ram_session_capability ram_session_cap() override { return Ram_session_capability(); }
Cpu_session *cpu_session() override { return nullptr; }
Cpu_session_capability cpu_session_cap() override { return Cpu_session_capability(); }
Rm_session *rm_session() override { return nullptr; }
Pd_session *pd_session() override { return nullptr; }
Allocator *heap() override { return nullptr; }
void reinit(Native_capability::Dst, long) override { }
void reinit_main_thread(Rm_session_capability &) override { }
};
namespace Genode {
Env *env() {
static Mini_env inst;
return &inst;
}
}


@ -1,42 +0,0 @@
TARGET = test-sel4
SRC_CC = main.cc context_area.cc mini_env.cc thread.cc
LIBS = core_printf syscall
#
# Equivalent to base-common library, excluding some parts that would conflict
# with the minimalistic root-task environment (e.g., thread.cc)
#
LIBS += cxx startup
SRC_CC += ipc/ipc.cc
SRC_CC += avl_tree/avl_tree.cc
SRC_CC += allocator/slab.cc
SRC_CC += allocator/allocator_avl.cc
SRC_CC += heap/heap.cc heap/sliced_heap.cc
SRC_CC += console/console.cc
SRC_CC += child/child.cc
SRC_CC += process/process.cc
SRC_CC += elf/elf_binary.cc
SRC_CC += lock/lock.cc
SRC_CC += signal/signal.cc signal/common.cc
SRC_CC += server/server.cc
SRC_CC += thread/trace.cc
SRC_CC += thread/context_allocator.cc
SRC_CC += env/capability.cc env/capability_space.cc
INC_DIR += $(REP_DIR)/src/base
INC_DIR += $(REP_DIR)/src/base/lock
INC_DIR += $(BASE_DIR)/src/base/lock
INC_DIR += $(BASE_DIR)/src/base/thread
vpath %.cc $(REP_DIR)/src/base
vpath %.cc $(BASE_DIR)/src/base
vpath %.cc $(BASE_DIR)/src
SRC_CC += thread_sel4.cc
vpath thread_sel4.cc $(REP_DIR)/src/base/thread
SRC_CC += thread_bootstrap.cc
vpath thread_bootstrap.cc $(REP_DIR)/src/base/thread


@ -1,62 +0,0 @@
/*
* \brief Implementation of the Thread API
* \author Norman Feske
* \date 2014-10-15
*/
/*
* Copyright (C) 2014 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU General Public License version 2.
*/
#include <base/thread.h>
#include <base/env.h>
#include <base/snprintf.h>
#include <util/string.h>
#include <util/misc_math.h>
using namespace Genode;
static Thread_base::Context *main_context()
{
enum { STACK_SIZE = sizeof(long)*4*1024 };
static long buffer[STACK_SIZE/sizeof(long)];
/* the context is located beyond the top of the stack */
addr_t const context_addr = (addr_t)buffer + sizeof(buffer)
- sizeof(Thread_base::Context);
Thread_base::Context *context = (Thread_base::Context *)context_addr;
context->stack_base = (addr_t)buffer;
return context;
}
Thread_base *Thread_base::myself() { return nullptr; }
void Thread_base::name(char *dst, size_t dst_len)
{
strncpy(dst, _context->name, dst_len);
}
Thread_base::Thread_base(size_t, const char *name, size_t stack_size, Type type,
Cpu_session *cpu_session)
:
_cpu_session(cpu_session), _context(main_context())
{
strncpy(_context->name, name, sizeof(_context->name));
_context->thread_base = this;
}
Thread_base::Thread_base(size_t quota, const char *name, size_t stack_size, Type type)
: Thread_base(quota, name, stack_size, type, nullptr) { }
Thread_base::~Thread_base() { }