genode/repos/base/src/base/server/common.cc
Martin Stein 8f9355b360 thread API & CPU session: accounting of CPU quota
In the init configuration, the donation of CPU time is declared via
'resource' tags whose attribute 'name' is set to "CPU" and whose
attribute 'quantum' is set to the percentage of CPU quota that init shall
donate. The pattern is the same as for donating RAM quota.

! <start name="test">
!   <resource name="CPU" quantum="75"/>
! </start>

This would cause init to try to donate 75% of its CPU quota to the child
"test". In contrast to the handling of RAM quota, init and core do not
preserve CPU quota for their own requirements by default.

The CPU quota that a process owns can be assigned to its threads via the
thread constructor. The constructor has been enhanced by an argument that
indicates the percentage of the program's CPU quota that shall be granted
to the new thread. So 'Thread(33, "test")' would cause the backing CPU
session to try to grant 33% of the program's CPU quota to the thread
"test". For now, the CPU quota of a thread cannot be altered after
construction. Constructing a thread with CPU quota 0 does not mean that
the thread is never scheduled but that the thread has no guarantee of
receiving CPU time. Such threads have to live with excess CPU time.
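
For illustration, the following is a minimal sketch of how a program might
hand out its quota to its threads. Only the leading quota argument is the
addition described above; the stack-size template parameter, the thread
name, and the 'entry' hook follow the conventional thread API and are
merely assumed here:

! /* worker that shall be guaranteed 33% of the program's CPU quota */
! struct Worker : Genode::Thread<8192>
! {
! 	Worker() : Genode::Thread<8192>(33, "worker") { }
!
! 	void entry()
! 	{
! 		/* ... time-critical work ... */
! 	}
! };
!
! /* quota 0: no guaranteed CPU time, runs on excess time only */
! struct Helper : Genode::Thread<8192>
! {
! 	Helper() : Genode::Thread<8192>(0, "helper") { }
!
! 	void entry()
! 	{
! 		/* ... best-effort work ... */
! 	}
! };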

Threads that already existed in the official Genode repositories were
adapted so that they receive a quota of 0.

This commit also provides the run test 'cpu_quota' in base-hw (currently
the only kernel that applies the CPU-quota scheme). The test basically
runs three threads with different physical CPU quotas. Each thread simply
counts for 30 seconds, and the test then checks whether the counter
values correspond to the CPU-quota distribution.
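
Assuming a build directory for base-hw has been created (e.g., via
'create_builddir hw_pbxa9'; the concrete board is only an example), the
test should be startable like any other run script:

! make run/cpu_quota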

fix #1275
2014-11-28 12:02:37 +01:00


/*
 * \brief  Platform-independent part of server-side RPC framework
 * \author Norman Feske
 * \date   2006-05-12
 */

/*
 * Copyright (C) 2006-2013 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU General Public License version 2.
 */

#include <base/rpc_server.h>
#include <base/rpc_client.h>
#include <base/blocking.h>
#include <base/env.h>

using namespace Genode;
void Rpc_entrypoint::_dissolve(Rpc_object_base *obj)
{
	/* make sure nobody is able to find this object */
	remove_locked(obj);

	/*
	 * The activation may execute a blocking operation in a dispatch function.
	 * Before dissolving the corresponding object, we need to ensure that it
	 * is no longer used. Therefore, we need to cancel a potentially blocking
	 * operation and let the activation leave the context of the object.
	 */
	_leave_server_object(obj);

	/* wait until nobody is inside dispatch */
	obj->acquire();

	_cap_session->free(obj->cap());

	/* now the object may be safely destructed */
}
void Rpc_entrypoint::_leave_server_object(Rpc_object_base *obj)
{
	Lock::Guard lock_guard(_curr_obj_lock);

	if (obj == _curr_obj)
		cancel_blocking();
}


void Rpc_entrypoint::_block_until_cap_valid()
{
	_cap_valid.lock();
}


Untyped_capability Rpc_entrypoint::reply_dst()
{
	return _ipc_server ? _ipc_server->dst() : Untyped_capability();
}


void Rpc_entrypoint::omit_reply()
{
	/* set current destination to an invalid capability */
	if (_ipc_server) _ipc_server->dst(Untyped_capability());
}


void Rpc_entrypoint::explicit_reply(Untyped_capability reply_cap, int return_value)
{
	if (!_ipc_server) return;

	/* backup reply capability of current request */
	Untyped_capability last_reply_cap = _ipc_server->dst();

	/* direct ipc server to the specified reply destination */
	_ipc_server->ret(return_value);
	_ipc_server->dst(reply_cap);
	*_ipc_server << IPC_REPLY;

	/* restore reply capability of the original request */
	_ipc_server->dst(last_reply_cap);
}


void Rpc_entrypoint::activate()
{
	_delay_start.unlock();
}


bool Rpc_entrypoint::is_myself() const
{
	return (Thread_base::myself() == this);
}
Rpc_entrypoint::Rpc_entrypoint(Cap_session *cap_session, size_t stack_size,
                               char const *name, bool start_on_construction,
                               Affinity::Location location)
:
	Thread_base(0, name, stack_size),
	_cap(Untyped_capability()),
	_curr_obj(0), _cap_valid(Lock::LOCKED), _delay_start(Lock::LOCKED),
	_delay_exit(Lock::LOCKED),
	_cap_session(cap_session)
{
	/* set CPU affinity, if specified */
	if (location.valid())
		env()->cpu_session()->affinity(Thread_base::cap(), location);

	Thread_base::start();
	_block_until_cap_valid();

	if (start_on_construction)
		activate();

	_exit_cap = manage(&_exit_handler);
}
Rpc_entrypoint::~Rpc_entrypoint()
{
	typedef Object_pool<Rpc_object_base> Pool;

	/*
	 * We have to make sure the server loop is running, which is only the case
	 * if the Rpc_entrypoint was activated before we execute the RPC call.
	 */
	_delay_start.unlock();

	/* leave server loop */
	_exit_cap.call<Exit::Rpc_exit>();

	dissolve(&_exit_handler);

	if (Pool::first()) {
		PWRN("Object pool not empty in %s", __func__);

		/* dissolve all objects - objects are not destroyed! */
		while (Rpc_object_base *obj = Pool::first())
			_dissolve(obj);
	}

	/*
	 * Now that we finished the 'dissolve' steps above (which need a working
	 * 'Ipc_server' in the context of the entrypoint thread), we can allow the
	 * entrypoint thread to leave the scope. Thereby, the 'Ipc_server' object
	 * will get destructed.
	 */
	_delay_exit.unlock();

	join();
}