genode/repos/base-sel4/src/core/include/page_table_registry.h

/*
 * \brief  Associate page-table and frame selectors with virtual addresses
 * \author Norman Feske
 * \date   2015-05-04
 */

/*
 * Copyright (C) 2015-2017 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU Affero General Public License version 3.
 */

#ifndef _CORE__INCLUDE__PAGE_TABLE_REGISTRY_H_
#define _CORE__INCLUDE__PAGE_TABLE_REGISTRY_H_

/* Genode includes */
#include <base/exception.h>
#include <base/heap.h>
#include <base/log.h>
#include <base/tslab.h>
#include <util/avl_tree.h>

/* core includes */
#include <util.h>
#include <cap_sel_alloc.h>

namespace Genode { class Page_table_registry; }

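/*
 * Usage sketch (illustrative only, not part of the interface; 'env',
 * 'vaddr', and 'frame_sel' are assumed to exist in the caller's context):
 *
 *   Sliced_heap md_alloc(env.ram(), env.rm());
 *   Page_table_registry registry(md_alloc);
 *
 *   registry.insert_page_frame(vaddr, frame_sel);
 *   ...
 *   registry.flush_page(vaddr, [&] (Cap_sel sel, addr_t vaddr) {
 *     ... unmap the page and revoke 'sel' ... });
 */
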
class Genode::Page_table_registry
{
	public:

		class Mapping_cache_full : Exception { };

	private:

		enum Level { FRAME, PAGE_TABLE, LEVEL2, LEVEL3 };
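
		/*
		 * Registry entry that associates a page-aligned virtual address
		 * with the selector of the corresponding page-frame capability
		 */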
		class Frame : public Avl_node<Frame>
		{
			private:

				addr_t  const _vaddr;
				Cap_sel const _sel;

				Frame *_lookup(addr_t vaddr)
				{
					if (vaddr == _vaddr) return this;

					Frame *e = Avl_node<Frame>::child(vaddr > _vaddr);
					return e ? e->_lookup(vaddr) : nullptr;
				}

				static addr_t _base(addr_t const vaddr, unsigned const log2base)
				{
					addr_t const size = 1UL << log2base;
					return vaddr & ~(size - 1);
				}

			public:

				Frame(addr_t const vaddr, Cap_sel const sel, unsigned log2base)
				:
					_vaddr(_base(vaddr, log2base)), _sel(sel)
				{ }

				Cap_sel sel()   const { return _sel; }
				addr_t  vaddr() const { return _vaddr; }

				static Frame * lookup(Avl_tree<Frame> &tree,
				                      addr_t   const   vaddr,
				                      unsigned const   log2base)
				{
					Frame * element = tree.first();
					if (!element)
						return nullptr;

					addr_t const align_addr = _base(vaddr, log2base);
					return element->_lookup(align_addr);
				}

				bool higher(Frame const *other) const {
					return other->_vaddr > _vaddr; }
		};
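
		/*
		 * Registry entry for a paging structure, associating the
		 * structure's page-aligned virtual address with its capability
		 * selector and physical address
		 */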
		class Table : public Avl_node<Table>
		{
			private:

				addr_t  const _vaddr;
				addr_t  const _paddr;
				Cap_sel const _sel;

				Table *_lookup(addr_t vaddr)
				{
					if (vaddr == _vaddr) return this;

					Table *e = Avl_node<Table>::child(vaddr > _vaddr);
					return e ? e->_lookup(vaddr) : nullptr;
				}

				static addr_t _base(addr_t const vaddr, unsigned const log2base)
				{
					addr_t const size = 1UL << log2base;
					return vaddr & ~(size - 1);
				}

			public:

				Table(addr_t const vaddr, addr_t const paddr,
				      Cap_sel const sel, unsigned log2base)
				:
					_vaddr(_base(vaddr, log2base)), _paddr(paddr), _sel(sel)
				{ }

				Cap_sel sel()   const { return _sel; }
				addr_t  vaddr() const { return _vaddr; }
				addr_t  paddr() const { return _paddr; }

				static Table * lookup(Avl_tree<Table> &tree,
				                      addr_t   const   vaddr,
				                      unsigned const   log2base)
				{
					Table * element = tree.first();
					if (!element)
						return nullptr;

					addr_t const align_addr = _base(vaddr, log2base);
					return element->_lookup(align_addr);
				}

				bool higher(Table const *other) const {
					return other->_vaddr > _vaddr; }
		};

		enum {
			LEVEL_0 = 12, /* log2 size of a 4 KiB page */
		};
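
		/*
		 * Slab allocators for the registry entries, each seeded with a
		 * statically allocated initial slab block
		 */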
		static constexpr size_t SLAB_BLOCK_SIZE = get_page_size()
		                                        - Sliced_heap::meta_data_size();

		Tslab<Frame, SLAB_BLOCK_SIZE> _alloc_frames;
		uint8_t _initial_sb_frame[SLAB_BLOCK_SIZE];

		Tslab<Table, SLAB_BLOCK_SIZE> _alloc_high;
		uint8_t _initial_sb_high[SLAB_BLOCK_SIZE];
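
		/*
		 * One AVL tree of page frames plus one tree per level of paging
		 * structures
		 */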
		Avl_tree<Frame> _frames { };
		Avl_tree<Table> _level1 { };
		Avl_tree<Table> _level2 { };
		Avl_tree<Table> _level3 { };
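
		/*
		 * Create a registry entry for the given level and insert it into
		 * the corresponding tree; exhaustion of the backing store is
		 * reported as 'Mapping_cache_full'
		 */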
		void _insert(addr_t const vaddr, Cap_sel const sel, Level const level,
		             addr_t const paddr, unsigned const level_log2_size)
		{
			try {
				switch (level) {
				case FRAME:
					_frames.insert(new (_alloc_frames)
					               Frame(vaddr, sel, level_log2_size));
					break;
				case PAGE_TABLE:
					_level1.insert(new (_alloc_high)
					               Table(vaddr, paddr, sel, level_log2_size));
					break;
				case LEVEL2:
					_level2.insert(new (_alloc_high)
					               Table(vaddr, paddr, sel, level_log2_size));
					break;
				case LEVEL3:
					_level3.insert(new (_alloc_high)
					               Table(vaddr, paddr, sel, level_log2_size));
					break;
				}
			}
			catch (Genode::Allocator::Out_of_memory) { throw Mapping_cache_full(); }
			catch (Genode::Out_of_caps)              { throw Mapping_cache_full(); }
		}
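
		/*
		 * Remove and destroy all entries of 'tree', applying 'fn' to the
		 * selector and physical address of each entry beforehand
		 */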
		template <typename FN, typename T>
		void _flush_high(FN const &fn, Avl_tree<T> &tree, Allocator &alloc)
		{
			for (T *element; (element = tree.first());) {
				fn(element->sel(), element->paddr());
				tree.remove(element);
				destroy(alloc, element);
			}
		}

	public:

		/**
		 * Constructor
		 *
		 * \param md_alloc  backing store allocator for metadata
		 */
		Page_table_registry(Allocator &md_alloc)
		:
			_alloc_frames(md_alloc, _initial_sb_frame),
			_alloc_high(md_alloc, _initial_sb_high)
		{ }

		~Page_table_registry()
		{
			if (_frames.first() || _level1.first() || _level2.first() ||
			    _level3.first())
				error("page-table registry still contains entries at destruction");
		}
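
		/*
		 * Return true if a page frame, page table, page directory, or
		 * level-3 table is registered for the given virtual address
		 */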
		bool page_frame_at(addr_t const vaddr) {
			return Frame::lookup(_frames, vaddr, LEVEL_0); }

		bool page_table_at(addr_t const vaddr, addr_t const level_log2) {
			return Table::lookup(_level1, vaddr, level_log2); }

		bool page_directory_at(addr_t const vaddr, addr_t const level_log2) {
			return Table::lookup(_level2, vaddr, level_log2); }

		bool page_level3_at(addr_t const vaddr, addr_t const level_log2) {
			return Table::lookup(_level3, vaddr, level_log2); }
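
		/*
		 * Register a mapping of the given level; for paging structures,
		 * the physical address is stored alongside the selector and later
		 * handed to the flush callback
		 */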
		void insert_page_frame(addr_t const vaddr, Cap_sel const sel) {
			_insert(vaddr, sel, Level::FRAME, 0, LEVEL_0); }

		void insert_page_table(addr_t const vaddr, Cap_sel const sel,
		                       addr_t const paddr, addr_t const level_log2) {
			_insert(vaddr, sel, Level::PAGE_TABLE, paddr, level_log2); }

		void insert_page_directory(addr_t const vaddr, Cap_sel const sel,
		                           addr_t const paddr, addr_t const level_log2) {
			_insert(vaddr, sel, Level::LEVEL2, paddr, level_log2); }

		void insert_page_level3(addr_t const vaddr, Cap_sel const sel,
		                        addr_t const paddr, addr_t const level_log2) {
			_insert(vaddr, sel, Level::LEVEL3, paddr, level_log2); }

		/**
		 * Apply functor 'fn' to the selector of the specified virtual
		 * address and flush the page frame from this cache
		 *
		 * \param vaddr  virtual address
		 *
		 * The functor is called with the selector of the page-table entry
		 * (the copy of the phys frame selector) and the virtual address of
		 * the flushed page as arguments.
		 */
		template <typename FN>
		void flush_page(addr_t vaddr, FN const &fn)
		{
			Frame * frame = Frame::lookup(_frames, vaddr, LEVEL_0);
			if (!frame)
				return;

			fn(frame->sel(), frame->vaddr());
			_frames.remove(frame);
			destroy(_alloc_frames, frame);
		}
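
		/*
		 * Apply 'fn' to all registered page frames, destroying each entry
		 * for which 'fn' returns true; surviving entries are parked in a
		 * temporary tree and re-inserted afterwards so that the traversal
		 * visits each entry exactly once
		 */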
		template <typename FN>
		void flush_pages(FN const &fn)
		{
			Avl_tree<Frame> tmp { };

			for (Frame *frame; (frame = _frames.first());) {
				if (fn(frame->sel(), frame->vaddr())) {
					_frames.remove(frame);
					destroy(_alloc_frames, frame);
				} else {
					_frames.remove(frame);
					tmp.insert(frame);
				}
			}

			for (Frame *frame; (frame = tmp.first());) {
				tmp.remove(frame);
				_frames.insert(frame);
			}
		}
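
		/*
		 * Flush all entries: page frames via the 'pages' functor, paging
		 * structures of all levels via the 'level' functor
		 */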
		template <typename PG, typename LV>
		void flush_all(PG const &pages, LV const &level)
		{
			flush_pages(pages);

			_flush_high(level, _level1, _alloc_high);
			_flush_high(level, _level2, _alloc_high);
			_flush_high(level, _level3, _alloc_high);
		}
};
#endif /* _CORE__INCLUDE__PAGE_TABLE_REGISTRY_H_ */