genode/repos/base/src/lib/base/sliced_heap.cc
Norman Feske eba9c15746 Follow practices suggested by "Effective C++"
The patch adjusts the code of the base, base-<kernel>, and os repositories
to fix violations of the best practices suggested by "Effective C++" as
reported by the -Weffc++ compiler argument. The changes follow the
patterns outlined below:

* A class with virtual functions can no longer publicly inherit base
  classes without a vtable. The inherited object may either be moved
  to a member variable or be inherited privately. The latter would be
  used for classes that inherit 'List::Element' or 'Avl_node'. In order
  to enable the 'List' or 'Avl_tree' to access the meta data, the
  'List' or 'Avl_tree' must become a friend.
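  For example, a sketch of this pattern could look as follows (the
  'Item' class name is made up for illustration):

        class Item : private Genode::List<Item>::Element
        {
            private:

                friend class Genode::List<Item>;

            public:

                virtual void print() const = 0;
        };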

* Instead of adding a virtual destructor to each abstract base class,
  we inherit from the new 'Interface' class, which provides a virtual
  destructor. This way, single-line abstract base classes can stay
  as compact as they are now. The 'Interface' utility resides in
  base/include/util/interface.h.
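  For example, a one-line abstract interface can stay as compact as
  before (the 'Doorbell' name is made up for illustration):

        struct Doorbell : Genode::Interface { virtual void ring() = 0; };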

* With the new warnings enabled, all member variables must be explicitly
  initialized. Basic types may be initialized with '='. All other types
  are initialized with braces '{ ... }' or as class initializers. If
  basic types and non-basic types appear in a row, it is nice to use
  the brace syntax uniformly (also for the basic types) and to align
  the braces.
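  For example (member names are made up for illustration):

        unsigned      _count = 0;        /* basic type */
        Genode::Lock  _lock { };         /* non-basic type */

  or, when mixing both kinds in a row, uniformly with aligned braces:

        unsigned      _count { 0 };
        Genode::Lock  _lock  { };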

* If a class contains pointers as members, it must now also provide a
  copy constructor and an assignment operator. In most cases, one
  would make them private, effectively preventing objects from being
  copied. Unfortunately, this warning cannot be avoided by inheriting
  our existing 'Noncopyable' class (the compiler fails to detect that
  the inheriting class cannot be copied and still emits the warning).
  For now, we have to manually add declarations for both the copy
  constructor and the assignment operator as private class members.
  Those declarations should be prepended with a comment like this:

        /*
         * Noncopyable
         */
        Thread(Thread const &);
        Thread &operator = (Thread const &);

  In the future, we should revisit these places and try to replace
  the pointers with references. The -Weffc++ warning applies only to
  classes with pointer members, so with references in place, we could
  remove the manual declarations.
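  As an illustrative sketch of such a rewrite (the 'Stack' member and
  constructor are made up, not part of this patch):

        class Thread
        {
            private:

                Stack &_stack;   /* reference member instead of 'Stack *' */

            public:

                Thread(Stack &stack) : _stack(stack) { }
        };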

Issue #465
2018-01-17 12:14:35 +01:00


/*
 * \brief  Heap that stores each block at a separate dataspace
 * \author Norman Feske
 * \date   2006-08-16
 */

/*
 * Copyright (C) 2006-2017 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU Affero General Public License version 3.
 */

#include <util/construct_at.h>
#include <base/heap.h>
#include <base/log.h>

using namespace Genode;


Sliced_heap::Sliced_heap(Ram_allocator &ram_alloc, Region_map &region_map)
:
	_ram_alloc(ram_alloc), _region_map(region_map)
{ }


Sliced_heap::~Sliced_heap()
{
	for (Block *b; (b = _blocks.first()); ) {

		/*
		 * Compute pointer to payload, which follows the meta-data header.
		 * Note the pointer arithmetics. By adding 1 to 'b', we end up with
		 * 'payload' pointing to the data portion of the block.
		 */
		void * const payload = b + 1;
		free(payload, b->size);
	}
}


bool Sliced_heap::alloc(size_t size, void **out_addr)
{
	/* allocation includes space for block meta data and is page-aligned */
	size = align_addr(size + sizeof(Block), 12);

	Ram_dataspace_capability ds_cap;
	Block *block = nullptr;

	try {
		ds_cap = _ram_alloc.alloc(size);
		block  = _region_map.attach(ds_cap);
	}
	catch (Region_map::Region_conflict) {
		error("sliced_heap: region conflict while attaching dataspace");
		_ram_alloc.free(ds_cap);
		return false;
	}
	catch (Region_map::Invalid_dataspace) {
		error("sliced_heap: attempt to attach invalid dataspace");
		_ram_alloc.free(ds_cap);
		return false;
	}
	catch (Out_of_ram) {
		error("could not allocate dataspace with size ", size);
		return false;
	}

	/* serialize access to block list */
	Lock::Guard lock_guard(_lock);

	construct_at<Block>(block, ds_cap, size);

	_consumed += size;
	_blocks.insert(block);

	/* skip meta data prepended to the payload portion of the block */
	*out_addr = block + 1;

	return true;
}


void Sliced_heap::free(void *addr, size_t)
{
	Ram_dataspace_capability ds_cap;
	void *local_addr = nullptr;
	{
		/* serialize access to block list */
		Lock::Guard lock_guard(_lock);

		/*
		 * The 'addr' argument points to the payload. We use pointer
		 * arithmetics to determine the pointer to the block's meta data that
		 * is prepended to the payload.
		 */
		Block * const block = reinterpret_cast<Block *>(addr) - 1;

		_blocks.remove(block);
		_consumed -= block->size;
		ds_cap     = block->ds;
		local_addr = block;

		/*
		 * Call destructor to properly destruct the dataspace capability
		 * member of the 'Block'.
		 */
		block->~Block();
	}

	_region_map.detach(local_addr);
	_ram_alloc.free(ds_cap);
}


size_t Sliced_heap::overhead(size_t size) const
{
	return align_addr(size + sizeof(Block), 12) - size;
}