hw: clean up scheduling-readiness syscalls

This cleans up the syscalls that are mainly used to control the
scheduling readiness of a thread. The previous interface mixed the
different use cases and requirements together. The
new syscall set is:

1) pause_thread and resume_thread

They don't affect the state of the thread (IPC, signalling, etc.) but
merely decide whether the thread is allowed for scheduling or not, the
so-called pause state. The pause state is orthogonal to the thread state
and masks it when it comes to scheduling. In contrast to the stopped
state, which is described in "stop_thread and restart_thread", the
thread state and the UTCB content of a thread may change while in the
paused state. However, the register state of a thread doesn't change
while paused. The "pause" and "resume" syscalls are both core-restricted
and may target any thread. They are used as back end for the CPU session
calls "pause" and "resume". The "pause/resume" feature is made for
applications like the GDB monitor that want to transparently stop and
continue the execution of a thread no matter what state the thread is
in.
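As an illustration (not part of this patch), a core-side back end for the
CPU-session calls might map onto the new syscalls roughly as follows; the
'Platform_thread' wrapper and its 'kernel_object()' accessor are assumed
here the way they are used elsewhere in core:

  /* hedged sketch, names are assumptions rather than verbatim core code */
  void Platform_thread::pause()  { Kernel::pause_thread(kernel_object());  }
  void Platform_thread::resume() { Kernel::resume_thread(kernel_object()); }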

2) stop_thread and restart_thread

The stop syscall can only be used on a thread in the non-blocking
("active") thread state. The thread then switches to the "stopped"
thread state in which it explicitly waits for a restart. The restart
syscall can only be used on a thread in the "stopped" or the "active"
thread state. The thread then switches back to the "active" thread state
and the syscall returns whether the thread was stopped. Neither syscall
is core-restricted. "Stop" always targets the calling thread while
"restart" may target any thread in the same PD as the caller. Thread
state and UTCB content of a thread don't change while in the stopped
state. The "stop/restart" feature is used when an active thread wants to
wait for an event that is not known to the kernel. In practice, the
syscalls are used when waiting for locks and on thread exit.
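A minimal sketch of the intended pattern, with 'event_occurred' and
'waiter_capid' as purely illustrative names for state that only user land
knows about:

  /* waiting thread: block until the event has arrived */
  while (!event_occurred)
      Kernel::stop_thread();                  /* enter the 'stopped' state */

  /* waking thread (same PD): publish the event, then restart the waiter */
  event_occurred = true;
  bool const was_stopped = Kernel::restart_thread(waiter_capid);

The returned 'was_stopped' covers the race where the waiter observed the
event before it ever reached the stopped state.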

3) cancel_thread_blocking

Cleanly cancels a cancelable blocking thread state (IPC, signalling,
stopped). The thread whose blocking was cancelled goes back to the
"active" thread state. It may receive a syscall return value that
reflects the cancellation. This syscall doesn't affect the pause state
of the thread, which means that it may still not get scheduled. The
syscall is core-restricted and may target any thread.
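For example, core could use the syscall along these lines to unblock a
thread that hangs in a cancelable IPC; the 'Kernel::Thread *server'
pointer is assumed to be at hand and the snippet is only a sketch:

  /* core-only: abort the cancelable blocking, if any */
  Kernel::cancel_thread_blocking(server);
  /* a pause state, if set, persists, so the thread may still not run */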

4) yield_thread

Does its best to ensure that the thread is scheduled as little as
possible in the current scheduling super-period without touching the
thread or pause state. In the next super-period, however, the thread is
scheduled normally again. The syscall is not core-restricted and always
targets the caller.
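A sketch of the typical use in a contended path, with 'lock_taken' as an
illustrative placeholder:

  /* back off so the lock holder keeps the CPU for the rest of the super-period */
  while (lock_taken)
      Kernel::yield_thread();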

Fixes #2104
Martin Stein 2016-09-15 17:23:06 +02:00 committed by Norman Feske
parent ccffbb0dfc
commit 71d30297ff
13 changed files with 196 additions and 125 deletions

@@ -23,8 +23,8 @@ namespace Kernel
/**
* Kernel names of the kernel calls
*/
constexpr Call_arg call_id_pause_current_thread() { return 0; }
constexpr Call_arg call_id_resume_local_thread() { return 1; }
constexpr Call_arg call_id_stop_thread() { return 0; }
constexpr Call_arg call_id_restart_thread() { return 1; }
constexpr Call_arg call_id_yield_thread() { return 2; }
constexpr Call_arg call_id_send_request_msg() { return 3; }
constexpr Call_arg call_id_send_reply_msg() { return 4; }
@@ -118,38 +118,56 @@ namespace Kernel
/**
* Pause execution of calling thread
* Wait for a user event signaled by a 'restart_thread' syscall
*
* The stop syscall always targets the calling thread, which therefore must
* be in the 'active' thread state. The thread then switches to the
* 'stopped' thread state in which it waits for a restart. The restart
* syscall can only be used on a thread in the 'stopped' or the 'active'
* thread state. The thread then switches back to the 'active' thread
* state and the syscall returns whether the thread was stopped. Neither
* syscall is core-restricted. In contrast to the 'stop' syscall,
* 'restart' may target any thread in the same PD as the caller. Thread
* state and UTCB content of a thread don't change while in the 'stopped'
* state. The 'stop/restart' feature is used when an active thread wants
* to wait for an event that is not known to the kernel. In practice, the
* syscalls are used when waiting for locks and when waiting indefinitely
* on thread exit.
*/
inline void pause_current_thread()
inline void stop_thread()
{
call(call_id_pause_current_thread());
call(call_id_stop_thread());
}
/**
* Cancel blocking of a thread of the current domain if possible
* End blocking of a stopped thread
*
* \param thread_id capability id of the targeted thread
*
* \return wether thread was in a cancelable blocking beforehand
* \return whether the thread was stopped beforehand
*
* For details see the 'stop_thread' syscall.
*/
inline bool resume_local_thread(capid_t const thread_id)
inline bool restart_thread(capid_t const thread_id)
{
return call(call_id_resume_local_thread(), thread_id);
return call(call_id_restart_thread(), thread_id);
}
/**
* Let the current thread give up its remaining timeslice
* Yield the caller's remaining CPU time for this super-period
*
* \param thread_id capability id of the benefited thread
*
* If thread_id is valid the call will resume the targeted thread
* additionally.
* Does its best to ensure that the caller is scheduled as little as
* possible in the current scheduling super-period without touching the
* thread or pause state of the thread. In the next super-period, however,
* the thread is scheduled normally again. The syscall is not
* core-restricted and always targets the caller. In practice, it is used
* in locks to help another thread reach a desired point in execution by
* releasing pressure from the CPU.
*/
inline void yield_thread(capid_t const thread_id)
inline void yield_thread()
{
call(call_id_yield_thread(), thread_id);
call(call_id_yield_thread());
}
/**

@@ -58,6 +58,7 @@ namespace Kernel
constexpr Call_arg call_id_ack_irq() { return 120; }
constexpr Call_arg call_id_new_obj() { return 121; }
constexpr Call_arg call_id_delete_obj() { return 122; }
constexpr Call_arg call_id_cancel_thread_blocking() { return 123; }
/**
* Update locally effective domain configuration to in-memory state
@@ -87,9 +88,23 @@ namespace Kernel
/**
* Pause execution of a specific thread
* Pause execution of a thread until 'resume_thread' is called on it
*
* \param thread pointer to thread kernel object
*
* This doesn't affect the state of the thread (IPC, signalling, etc.) but
* merely whether the thread is allowed for scheduling or not. The pause
* state simply masks the thread state when it comes to scheduling. In
* contrast to the 'stopped' thread state, which is described in the
* documentation of the 'stop_thread/resume_thread' syscalls, the pause
* state doesn't freeze the thread state and the UTCB content of a thread.
* However, the register state of a thread doesn't change while paused.
* The 'pause' and 'resume' syscalls are both core-restricted and may
* target any thread. They are used as back end for the CPU session calls
* 'pause' and 'resume'. The 'pause/resume' feature is made for
* applications like the GDB monitor that want to transparently stop and
* continue the execution of a thread no matter what state the thread is
* in.
*/
inline void pause_thread(Thread * const thread)
{
@@ -97,6 +112,17 @@
}
/**
* End the pause state of a thread
*
* \param thread pointer to thread kernel object
*/
inline void resume_thread(Thread * const thread)
{
call(call_id_resume_thread(), (Call_arg)thread);
}
/**
* Start execution of a thread
*
@@ -117,15 +143,23 @@
/**
* Cancel blocking of a thread if possible
* Cancel blocking of a thread if it is in a cancelable blocking state
*
* \param thread pointer to thread kernel object
*
* \return wether thread was in a cancelable blocking beforehand
* Cleanly cancels a cancelable blocking thread state (IPC, signalling,
* stopped). The thread whose blocking was cancelled goes back to the
* 'active' thread state. If needed, it receives a syscall return value
* that reflects the cancellation. This syscall doesn't affect the pause
* state of the thread (see the 'pause_thread' syscall) which means that
* the thread may still not be allowed for scheduling. The syscall is
* core-restricted and may target any thread. It is actually used to
* limit the time a parent waits for a server when closing a session
* of one of its children.
*/
inline bool resume_thread(Thread * const thread)
inline void cancel_thread_blocking(Thread * const thread)
{
return call(call_id_resume_thread(), (Call_arg)thread);
call(call_id_cancel_thread_blocking(), (Call_arg)thread);
}

@@ -86,15 +86,17 @@ class Kernel::Thread
private:
enum { START_VERBOSE = 0 };
enum State
{
ACTIVE = 1,
AWAITS_START = 2,
AWAITS_IPC = 3,
AWAITS_RESUME = 4,
AWAITS_RESTART = 4,
AWAITS_SIGNAL = 5,
AWAITS_SIGNAL_CONTEXT_KILL = 6,
STOPPED = 7,
DEAD = 7,
};
Thread_event _fault;
@@ -106,6 +108,7 @@ class Kernel::Thread
Signal_receiver * _signal_receiver;
char const * const _label;
capid_t _timeout_sigid = 0;
bool _paused = false;
void _init();
@@ -161,22 +164,10 @@ class Kernel::Thread
*/
void _deactivate_used_shares();
/**
* Pause execution
*/
void _pause();
/**
* Suspend unrecoverably from execution
*/
void _stop();
/**
* Cancel blocking if possible
*
* \return wether thread was in a cancelable blocking beforehand
*/
bool _resume();
void _die();
/**
* Handle an exception thrown by the memory management unit
@@ -193,6 +184,10 @@ class Kernel::Thread
*/
size_t _core_to_kernel_quota(size_t const quota) const;
void _cancel_blocking();
bool _restart();
/*********************************************************
** Kernel-call back-ends, see kernel-interface headers **
@@ -201,10 +196,11 @@ class Kernel::Thread
void _call_new_thread();
void _call_thread_quota();
void _call_start_thread();
void _call_pause_current_thread();
void _call_stop_thread();
void _call_pause_thread();
void _call_resume_thread();
void _call_resume_local_thread();
void _call_cancel_thread_blocking();
void _call_restart_thread();
void _call_yield_thread();
void _call_await_request_msg();
void _call_send_request_msg();
@@ -281,7 +277,7 @@ class Kernel::Thread
** Cpu_domain_update **
***********************/
void _cpu_domain_update_unblocks() { _resume(); }
void _cpu_domain_update_unblocks() { _restart(); }
public:

@@ -178,7 +178,7 @@ namespace Genode {
inline Rom_fs *rom_fs() { return &_rom_fs; }
inline void wait_for_exit() {
while (1) { Kernel::pause_current_thread(); } };
while (1) { Kernel::stop_thread(); } };
bool supports_direct_unmap() const { return 1; }

@@ -147,7 +147,8 @@ namespace Genode {
/**
* Cancel currently blocking operation
*/
void cancel_blocking() { resume(); }
void cancel_blocking() {
Kernel::cancel_thread_blocking(kernel_object()); }
/**
* Set CPU quota of the thread to 'quota'

@@ -110,35 +110,6 @@ void Thread::_await_request_failed()
}
bool Thread::_resume()
{
switch (_state) {
case AWAITS_RESUME:
_become_active();
return true;
case AWAITS_IPC:
Ipc_node::cancel_waiting();
return true;
case AWAITS_SIGNAL:
Signal_handler::cancel_waiting();
user_arg_0(-1);
_become_active();
return true;
case AWAITS_SIGNAL_CONTEXT_KILL:
Signal_context_killer::cancel_waiting();
return true;
default:
return false;
}
}
void Thread::_pause()
{
assert(_state == AWAITS_RESUME || _state == ACTIVE);
_become_inactive(AWAITS_RESUME);
}
void Thread::_deactivate_used_shares()
{
Cpu_job::_deactivate_own_share();
@@ -155,32 +126,25 @@ void Thread::_activate_used_shares()
void Thread::_become_active()
{
if (_state != ACTIVE) { _activate_used_shares(); }
if (_state != ACTIVE && !_paused) { _activate_used_shares(); }
_state = ACTIVE;
}
void Thread::_become_inactive(State const s)
{
if (_state == ACTIVE) { _deactivate_used_shares(); }
if (_state == ACTIVE && !_paused) { _deactivate_used_shares(); }
_state = s;
}
void Thread::_stop() { _become_inactive(STOPPED); }
void Thread::_die() { _become_inactive(DEAD); }
Cpu_job * Thread::helping_sink() {
return static_cast<Thread *>(Ipc_node::helping_sink()); }
void Thread::_receive_yielded_cpu()
{
if (_state == AWAITS_RESUME) { _become_active(); }
else { Genode::warning("failed to receive yielded CPU"); }
}
void Thread::proceed(unsigned const cpu) { mtc()->switch_to_user(this, cpu); }
@@ -236,31 +200,90 @@ void Thread::_call_start_thread()
}
void Thread::_call_pause_current_thread() { _pause(); }
void Thread::_call_pause_thread() {
reinterpret_cast<Thread*>(user_arg_1())->_pause(); }
void Thread::_call_resume_thread() {
user_arg_0(reinterpret_cast<Thread*>(user_arg_1())->_resume()); }
void Thread::_call_resume_local_thread()
void Thread::_call_pause_thread()
{
if (!pd()) return;
Thread &thread = *reinterpret_cast<Thread*>(user_arg_1());
if (thread._state == ACTIVE && !thread._paused) {
thread._deactivate_used_shares(); }
thread._paused = true;
}
void Thread::_call_resume_thread()
{
Thread &thread = *reinterpret_cast<Thread*>(user_arg_1());
if (thread._state == ACTIVE && thread._paused) {
thread._activate_used_shares(); }
thread._paused = false;
}
void Thread::_call_stop_thread()
{
assert(_state == ACTIVE);
_become_inactive(AWAITS_RESTART);
}
void Thread::_call_restart_thread()
{
if (!pd()) {
return; }
/* lookup thread */
Thread * const thread = pd()->cap_tree().find<Thread>(user_arg_1());
if (!thread || pd() != thread->pd()) {
warning(*this, ": failed to lookup thread ", (unsigned)user_arg_1(),
" to resume it");
_stop();
" to restart it");
_die();
return;
}
user_arg_0(thread->_restart());
}
bool Thread::_restart()
{
assert(_state == ACTIVE || _state == AWAITS_RESTART);
if (_state != AWAITS_RESTART) { return false; }
_become_active();
return true;
}
void Thread::_call_cancel_thread_blocking()
{
reinterpret_cast<Thread*>(user_arg_1())->_cancel_blocking();
}
void Thread::_cancel_blocking()
{
switch (_state) {
case AWAITS_RESTART:
_become_active();
return;
case AWAITS_IPC:
Ipc_node::cancel_waiting();
return;
case AWAITS_SIGNAL:
Signal_handler::cancel_waiting();
user_arg_0(-1);
_become_active();
return;
case AWAITS_SIGNAL_CONTEXT_KILL:
Signal_context_killer::cancel_waiting();
return;
case ACTIVE:
return;
case DEAD:
Genode::error("can't cancel blocking of dead thread");
return;
case AWAITS_START:
Genode::error("can't cancel blocking of not yet started thread");
return;
}
/* resume thread */
user_arg_0(thread->_resume());
}
@@ -273,8 +296,6 @@ void Thread_event::submit() { if (_signal_context) _signal_context->submit(1); }
void Thread::_call_yield_thread()
{
Thread * const t = pd()->cap_tree().find<Thread>(user_arg_1());
if (t) { t->_receive_yielded_cpu(); }
Cpu_job::_yield();
}
@@ -525,8 +546,8 @@ void Thread::_call()
switch (call_id) {
case call_id_update_data_region(): _call_update_data_region(); return;
case call_id_update_instr_region(): _call_update_instr_region(); return;
case call_id_pause_current_thread(): _call_pause_current_thread(); return;
case call_id_resume_local_thread(): _call_resume_local_thread(); return;
case call_id_stop_thread(): _call_stop_thread(); return;
case call_id_restart_thread(): _call_restart_thread(); return;
case call_id_yield_thread(): _call_yield_thread(); return;
case call_id_send_request_msg(): _call_send_request_msg(); return;
case call_id_send_reply_msg(): _call_send_reply_msg(); return;
@@ -545,7 +566,7 @@ void Thread::_call()
/* check wether this is a core thread */
if (!_core()) {
Genode::warning(*this, ": not entitled to do kernel call");
_stop();
_die();
return;
}
}
@@ -556,6 +577,7 @@ void Thread::_call()
case call_id_delete_thread(): _call_delete<Thread>(); return;
case call_id_start_thread(): _call_start_thread(); return;
case call_id_resume_thread(): _call_resume_thread(); return;
case call_id_cancel_thread_blocking(): _call_cancel_thread_blocking(); return;
case call_id_route_thread_event(): _call_route_thread_event(); return;
case call_id_update_pd(): _call_update_pd(); return;
case call_id_new_pd():
@@ -582,7 +604,7 @@ void Thread::_call()
case call_id_delete_obj(): _call_delete_obj(); return;
default:
Genode::warning(*this, ": unknown kernel call");
_stop();
_die();
return;
}
} catch (Genode::Allocator::Out_of_memory &e) { user_arg_0(-2); }

@@ -41,14 +41,14 @@ void Thread::exception(unsigned const cpu)
if (_cpu->retry_undefined_instr(*this)) { return; }
Genode::warning(*this, ": undefined instruction at ip=",
Genode::Hex(ip));
_stop();
_die();
return;
case RESET:
return;
default:
Genode::warning(*this, ": triggered an unknown exception ",
cpu_exception);
_stop();
_die();
return;
}
}
@@ -56,7 +56,7 @@ void Thread::exception(unsigned const cpu)
void Thread::_mmu_exception()
{
_become_inactive(AWAITS_RESUME);
_become_inactive(AWAITS_RESTART);
if (in_fault(_fault_addr, _fault_writes)) {
_fault_pd = (addr_t)_pd->platform_pd();
_fault_signal = (addr_t)_fault.signal_context();
@@ -141,6 +141,6 @@ void Thread_event::_signal_acknowledged()
* functions.
*/
cpu_pool()->cpu(Cpu::executing_id())->translation_table_insertions();
_thread->_resume();
_thread->_restart();
}

@@ -18,5 +18,6 @@
void Kernel::Thread::_call_update_pd()
{
Pd * const pd = (Pd *) user_arg_1();
if (Cpu_domain_update::_do_global(pd->asid)) { _pause(); }
if (Cpu_domain_update::_do_global(pd->asid)) {
_become_inactive(AWAITS_RESTART); }
}

@@ -37,14 +37,14 @@ void Thread::exception(unsigned const cpu)
default:
Genode::warning(*this, ": unhandled exception ", cpu_exception,
" at ip=", (void*)ip, " addr=", Cpu::sbadaddr());
_stop();
_die();
}
}
void Thread::_mmu_exception()
{
_become_inactive(AWAITS_RESUME);
_become_inactive(AWAITS_RESTART);
_fault_pd = (addr_t)_pd->platform_pd();
_fault_signal = (addr_t)_fault.signal_context();
_fault_addr = Cpu::sbadaddr();
@@ -70,5 +70,5 @@ void Thread::_call_update_instr_region() { }
void Thread_event::_signal_acknowledged()
{
_thread->_resume();
_thread->_restart();
}

@@ -24,12 +24,12 @@ void Kernel::Thread::_call_update_data_region() { }
void Kernel::Thread::_call_update_instr_region() { }
void Kernel::Thread_event::_signal_acknowledged() { _thread->_resume(); }
void Kernel::Thread_event::_signal_acknowledged() { _thread->_restart(); }
void Kernel::Thread::_mmu_exception()
{
_become_inactive(AWAITS_RESUME);
_become_inactive(AWAITS_RESTART);
_fault_pd = (addr_t)_pd->platform_pd();
_fault_signal = (addr_t)_fault.signal_context();
_fault_addr = Cpu::Cr2::read();

@@ -27,11 +27,11 @@ void Thread::exception(unsigned const cpu)
case NO_MATH_COPROC:
if (_cpu->fpu().fault(*this)) { return; }
Genode::warning(*this, ": FPU error");
_stop();
_die();
return;
case UNDEFINED_INSTRUCTION:
Genode::warning(*this, ": undefined instruction at ip=", (void*)ip);
_stop();
_die();
return;
case SUPERVISOR_CALL:
_call();
@@ -43,5 +43,5 @@ void Thread::exception(unsigned const cpu)
}
Genode::warning(*this, ": triggered unknown exception ", trapno,
" with error code ", errcode, " at ip=%p", (void*)ip);
_stop();
_die();
}

@@ -27,11 +27,11 @@ void Thread::exception(unsigned const cpu)
case NO_MATH_COPROC:
if (_cpu->fpu().fault(*this)) { return; }
Genode::warning(*this, ": FPU error");
_stop();
_die();
return;
case UNDEFINED_INSTRUCTION:
Genode::warning(*this, ": undefined instruction at ip=", (void*)ip);
_stop();
_die();
return;
case SUPERVISOR_CALL:
_call();
@@ -44,5 +44,5 @@ void Thread::exception(unsigned const cpu)
}
Genode::warning(*this, ": triggered unknown exception ", trapno,
" with error code ", errcode, " at ip=%p", (void*)ip);
_stop();
_die();
}

@@ -26,8 +26,7 @@ namespace Hw { extern Genode::Untyped_capability _main_thread_cap; }
/**
* Yield execution time-slice of current thread
*/
static inline void thread_yield() {
Kernel::yield_thread(Kernel::cap_id_invalid()); }
static inline void thread_yield() { Kernel::yield_thread(); }
/**
@@ -46,7 +45,7 @@
*/
static inline void thread_switch_to(Genode::Thread * const t)
{
Kernel::yield_thread(native_thread_id(t));
Kernel::yield_thread();
}
@@ -56,14 +55,14 @@ static inline void thread_switch_to(Genode::Thread * const t)
static inline bool
thread_check_stopped_and_restart(Genode::Thread * const t)
{
return Kernel::resume_local_thread(native_thread_id(t));
return Kernel::restart_thread(native_thread_id(t));
}
/**
* Pause execution of current thread
*/
static inline void thread_stop_myself() { Kernel::pause_current_thread(); }
static inline void thread_stop_myself() { Kernel::stop_thread(); }
#endif /* _INCLUDE__BASE__INTERNAL__LOCK_HELPER_H_ */
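To show how the renamed helpers fit together, here is a rough sketch of a
blocking lock built on top of them; the queueing primitives
('enqueue_myself', 'dequeue_waiter') and the 'Lock' class are hypothetical
and only illustrate the control flow, they are not part of this patch:

  void Lock::lock()
  {
      while (!_try_acquire()) {
          enqueue_myself();         /* make this thread findable by unlock() */
          thread_stop_myself();     /* Kernel::stop_thread() underneath */
      }
  }

  void Lock::unlock()
  {
      _release();
      if (Genode::Thread *waiter = dequeue_waiter())
          /* returns false if the waiter never reached the stopped state */
          thread_check_stopped_and_restart(waiter);
  }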