genode/repos/base-linux/include/linux_cpu_session/client.h

/*
 * \brief  Client-side CPU session interface
 * \author Norman Feske
 * \date   2012-08-09
 */

/*
 * Copyright (C) 2006-2013 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU General Public License version 2.
 */
#ifndef _INCLUDE__LINUX_CPU_SESSION__CLIENT_H_
#define _INCLUDE__LINUX_CPU_SESSION__CLIENT_H_

#include <linux_cpu_session/linux_cpu_session.h>
#include <base/rpc_client.h>

namespace Genode {

	struct Linux_cpu_session_client : Rpc_client<Linux_cpu_session>
	{
		explicit Linux_cpu_session_client(Capability<Linux_cpu_session> session)
		: Rpc_client<Linux_cpu_session>(session) { }

		Thread_capability create_thread(size_t weight, Name const &name,
		                                addr_t utcb = 0) {
			return call<Rpc_create_thread>(weight, name, utcb); }

		Ram_dataspace_capability utcb(Thread_capability thread) {
			return call<Rpc_utcb>(thread); }

		void kill_thread(Thread_capability thread) {
			call<Rpc_kill_thread>(thread); }

		int set_pager(Thread_capability thread, Pager_capability pager) {
			return call<Rpc_set_pager>(thread, pager); }

		int start(Thread_capability thread, addr_t ip, addr_t sp) {
			return call<Rpc_start>(thread, ip, sp); }

		void pause(Thread_capability thread) {
			call<Rpc_pause>(thread); }

		void resume(Thread_capability thread) {
			call<Rpc_resume>(thread); }

		void cancel_blocking(Thread_capability thread) {
			call<Rpc_cancel_blocking>(thread); }

		Thread_state state(Thread_capability thread) {
			return call<Rpc_get_state>(thread); }

		void state(Thread_capability thread, Thread_state const &state) {
			call<Rpc_set_state>(thread, state); }

		void exception_handler(Thread_capability thread,
		                       Signal_context_capability handler) {
			call<Rpc_exception_handler>(thread, handler); }

		void single_step(Thread_capability thread, bool enable) {
			call<Rpc_single_step>(thread, enable); }

		Affinity::Space affinity_space() const {
			return call<Rpc_affinity_space>(); }

		void affinity(Thread_capability thread, Affinity::Location location) {
			call<Rpc_affinity>(thread, location); }
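
		/*
		 * Accessors related to tracing. Judging by the RPC names, these
		 * presumably hand out the shared trace-control dataspace, the
		 * index of the per-thread control slot within it, and the
		 * per-thread trace buffer and tracing policy; see the
		 * 'Linux_cpu_session' interface for the authoritative
		 * documentation.
		 */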

		Dataspace_capability trace_control() {
			return call<Rpc_trace_control>(); }

		unsigned trace_control_index(Thread_capability thread) {
			return call<Rpc_trace_control_index>(thread); }

		Dataspace_capability trace_buffer(Thread_capability thread) {
			return call<Rpc_trace_buffer>(thread); }

		Dataspace_capability trace_policy(Thread_capability thread) {
			return call<Rpc_trace_policy>(thread); }
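
		/*
		 * CPU-quota accounting: 'ref_account' ties this session to a
		 * reference session, and 'transfer_quota' donates a share of
		 * the session's CPU quota to another session, following the
		 * same pattern as the donation of RAM quota. In init's
		 * configuration, such a donation is expressed via a node like
		 * '<resource name="CPU" quantum="75"/>'.
		 */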

		int ref_account(Cpu_session_capability session) {
			return call<Rpc_ref_account>(session); }

		int transfer_quota(Cpu_session_capability session, size_t amount) {
			return call<Rpc_transfer_quota>(session, amount); }

		Quota quota() override { return call<Rpc_quota>(); }

		/*****************************
		 ** Linux-specific extension **
		 *****************************/
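
		/*
		 * The following RPC functions exist only in the base-linux
		 * variant of the CPU session. Presumably, 'thread_id' registers
		 * the Linux PID/TID pair of a thread with core, and 'server_sd'
		 * and 'client_sd' hand out the socket descriptors used for RPC
		 * communication with the thread.
		 */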

		void thread_id(Thread_capability thread, int pid, int tid) {
			call<Rpc_thread_id>(thread, pid, tid); }

		Untyped_capability server_sd(Thread_capability thread) {
			return call<Rpc_server_sd>(thread); }

		Untyped_capability client_sd(Thread_capability thread) {
			return call<Rpc_client_sd>(thread); }
	};
}
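
/*
 * Usage sketch (illustrative only, not part of the interface): given a
 * 'Capability<Linux_cpu_session>' named 'cap' and valid initial
 * instruction/stack pointers 'ip' and 'sp', a client could create and
 * start a thread as follows. A weight of 0 requests no guaranteed CPU
 * quota; such a thread is still scheduled but runs on excess CPU time
 * only.
 *
 *   Genode::Linux_cpu_session_client cpu(cap);
 *   Genode::Thread_capability t = cpu.create_thread(0, "worker");
 *   cpu.start(t, ip, sp);
 *   ...
 *   cpu.kill_thread(t);
 */
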
#endif /* _INCLUDE__LINUX_CPU_SESSION__CLIENT_H_ */