; supplying the parent capability to the new process


Framework infrastructure
########################

Apart from the very fundamental mechanisms implemented by core, all
higher-level services have to be implemented as part of the process tree on top
of core.
There are a number of frameworks at hand that provide convenient interfaces to
be used by such components.
In this section, we outline the most important frameworks.


Communication
=============

Our RPC framework models communication after C++ streams.
It provides four different stream classes: 'Ipc_ostream' for sending messages,
'Ipc_istream' for receiving messages, 'Ipc_client' for performing RPC calls,
and 'Ipc_server' for dispatching RPC calls. In the following, we illustrate
their use with examples.

:Sending a message:

!Ipc_ostream sender(dst, &snd_buf);
!sender << a << b << IPC_SEND;
The object 'sender' is an output stream that is initialized with a
communication endpoint ('dst') and a message buffer ('snd_buf').
To send the message, we sequentially insert both arguments into the stream,
thereby marshalling them into a message, and finally invoke the IPC mechanism
of the kernel by inserting the special object 'IPC_SEND'.

:Receiving a message:

!int a, b;
!Ipc_istream receiver(&rcv_buf);
!receiver >> IPC_WAIT >> a >> b;
For creating the 'receiver' input stream object, we specify a receive message
buffer as argument that can hold one incoming message.
By extracting the special object 'IPC_WAIT' from the receiver, we block until
a new message has been stored in 'rcv_buf'.
After returning from the blocking receive operation, we use the extraction
operator to _unmarshal_ the message argument by argument.

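To illustrate the mechanics behind these stream operators, the following
kernel-independent sketch marshals and unmarshals integers through a shared
message buffer. The 'Msgbuf', 'Ostream', and 'Istream' types are simplified
stand-ins for illustration, not the framework's actual classes:

```cpp
#include <cassert>
#include <cstring>

/* hypothetical fixed-size message buffer, standing in for 'snd_buf'/'rcv_buf' */
struct Msgbuf {
  char data[128];
  unsigned used = 0;   /* marshalling position   */
  unsigned read = 0;   /* unmarshalling position */
};

/* simplified 'Ipc_ostream': insertion marshals raw bytes into the buffer */
struct Ostream {
  Msgbuf &buf;
  Ostream(Msgbuf &b) : buf(b) { }
  Ostream &operator << (int value) {
    std::memcpy(buf.data + buf.used, &value, sizeof(value));
    buf.used += sizeof(value);
    return *this;
  }
};

/* simplified 'Ipc_istream': extraction unmarshals argument by argument */
struct Istream {
  Msgbuf &buf;
  Istream(Msgbuf &b) : buf(b) { }
  Istream &operator >> (int &value) {
    std::memcpy(&value, buf.data + buf.read, sizeof(value));
    buf.read += sizeof(value);
    return *this;
  }
};
```

In the real framework, inserting 'IPC_SEND' (or extracting 'IPC_WAIT') is the
point where the kernel's IPC mechanism gets invoked on the marshalled buffer;
the sketch models only the marshalling itself.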
:Performing an RPC call:

!Ipc_client client(dst, &snd_buf, &rcv_buf);
!int result;
!client << OPCODE_FUNC1 << 1 << 2
!       << IPC_CALL >> result;
The first argument is a constant that references one among
many server functions.
It is followed by the actual server-function arguments.
All arguments are marshalled into the 'snd_buf'.
When the special object 'IPC_CALL' is inserted into the 'client'
stream, the client blocks until the result of the RPC arrives.
After the result message has been received in 'rcv_buf', the RPC
results can be sequentially unmarshalled via the extraction
operator. Note that 'rcv_buf' and 'snd_buf' may use the
same backing store because the two buffers are used in an
interleaved fashion.

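The combined send-and-receive semantics of 'IPC_CALL' can be pictured with the
following sketch, where inserting the call marker triggers the round trip. The
'Client' class and its transport callback are illustrative stand-ins for the
actual 'Ipc_client' and the kernel IPC, not framework code:

```cpp
#include <cassert>
#include <functional>
#include <vector>

/* marker type standing in for the special 'IPC_CALL' object */
struct Ipc_call { };
static const Ipc_call IPC_CALL { };

/* simplified 'Ipc_client': collects arguments, performs the round trip
   when IPC_CALL is inserted, and hands out the result on extraction */
struct Client {
  std::vector<int> snd;   /* stands in for 'snd_buf' */
  int rcv = 0;            /* stands in for 'rcv_buf' */
  std::function<int(std::vector<int> const &)> transport;  /* kernel IPC stand-in */

  Client(std::function<int(std::vector<int> const &)> t) : transport(t) { }

  Client &operator << (int v)       { snd.push_back(v); return *this; }
  Client &operator << (Ipc_call)    { rcv = transport(snd); return *this; }
  Client &operator >> (int &result) { result = rcv; return *this; }
};
```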
:Dispatching an RPC call:

!Ipc_server server(&snd_buf, &rcv_buf);
!while (1) {
!  int opcode;
!  server >> IPC_REPLY_WAIT >> opcode;
!  switch (opcode) {
!  case OPCODE_FUNC1:
!    {
!      int a, b;
!      server >> a >> b;
!      server << func1(a, b);
!      break;
!    }
!  ..
!  }
!}
The special object 'IPC_REPLY_WAIT' replies to the request of the previous
server-loop iteration with the message stored in 'snd_buf' (ignored for the
first iteration) and then waits for an incoming RPC request to be received
in 'rcv_buf'.
By convention, the first message argument contains the opcode that identifies
the server function to handle the request.
After extracting the opcode from the 'server' stream, we branch into
a server-function-specific wrapper that reads the function arguments, calls the
actual server function, and inserts the function result into the 'server' stream.
The result message is delivered at the beginning of the next server-loop
iteration.
The two-stage parsing of the argument message (first the opcode to select the
server function, then the server-function arguments) is simply done by
subsequent extraction operations.


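One iteration of such a dispatch loop can be sketched without any kernel
involvement by operating on an already-received message. The 'dispatch'
function, the opcodes, and 'func1'/'func2' below are hypothetical examples
that only mirror the two-stage extraction described above:

```cpp
#include <cassert>
#include <vector>

enum { OPCODE_FUNC1 = 1, OPCODE_FUNC2 = 2 };

/* example server functions behind the opcodes */
static int func1(int a, int b) { return a + b; }
static int func2(int a, int b) { return a * b; }

/* dispatch one request; by convention, the first argument is the opcode */
static int dispatch(std::vector<int> const &rcv_buf)
{
  unsigned pos = 0;
  int opcode = rcv_buf.at(pos++);   /* first stage: extract the opcode */
  switch (opcode) {
  case OPCODE_FUNC1:
    {
      int a = rcv_buf.at(pos++), b = rcv_buf.at(pos++);  /* second stage: args */
      return func1(a, b);  /* in the real loop, the result is marshalled into
                              'snd_buf' and delivered by the next IPC_REPLY_WAIT */
    }
  case OPCODE_FUNC2:
    {
      int a = rcv_buf.at(pos++), b = rcv_buf.at(pos++);
      return func2(a, b);
    }
  }
  return -1;  /* unknown opcode */
}
```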
Server framework
================

[image server_framework]
  Relationships between the classes of the server object framework

Each component that makes local objects remotely accessible to other components
has to provide means to dispatch RPC requests that refer to different objects.
This procedure depends strongly on the mechanisms provided by the
underlying kernel.
The primary motivation of the server framework is to hide the kernel-specific
paradigms for communication, control flow, and the implementation of
local names (capabilities) behind a generic interface.
The server framework unifies the control flow of RPC dispatching and the mapping
between capabilities and local objects using the classes depicted in
Figure [server_framework].

:'Object_pool': is an associative array that maps capabilities from/to local objects.
  Because capabilities are protected kernel objects, the object pool's functionality
  is supported by the kernel.

*Note:* _On L4v2 and Linux, capabilities are not protected by the kernel but are_
_implemented as unique IDs. On these base platforms, the object pool performs_
_the simple mapping of such unique IDs to object pointers in the local_
_address space._

:'Server_object': is an object-pool entry that contains a dispatch function.
  To make a local object type available to remote components, the local
  object type must inherit from 'Server_object' and provide the implementation
  of the dispatch function as described in Section [Communication].

:'Server_entrypoint': is an object pool that acts as a logical communication entrypoint.
  It can manage any number of server objects. When a server object gets
  registered at a server entrypoint (via the 'manage' method), a capability
  for the object is created. This capability can be communicated to other
  processes, which can then use the server object's RPC interface.

:'Server_activation': is the stack (or thread) used for handling the RPC requests
  of an entrypoint.

*Note:* _On L4v2 and Linux, exactly one server activation must be attached to_
_a server entrypoint. This implies that RPC requests are handled in a_
_strictly serialized manner and that one blocking server function delays all_
_other pending RPC requests that refer to the same server entrypoint. Concurrent_
_handling of RPC requests should be realized with multiple (completely independent)_
_server entrypoints._


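Under the simplifying assumption of unique-ID capabilities (as on L4v2 and
Linux), the interplay of object pool, server objects, and the 'manage' method
can be sketched as follows. All classes here are illustrative mock-ups, not
the framework's actual interfaces:

```cpp
#include <cassert>
#include <map>

/* mock-up of a server object: an object-pool entry with a dispatch function */
struct Server_object {
  virtual ~Server_object() { }
  virtual int dispatch(int opcode, int arg) = 0;
};

/* mock-up of an object pool that maps unique IDs ("capabilities") to
   local objects and routes incoming requests to them */
struct Object_pool {
  std::map<unsigned long, Server_object *> objects;
  unsigned long next_id = 1;

  /* counterpart of 'Server_entrypoint::manage': returns a new "capability" */
  unsigned long manage(Server_object *obj) {
    unsigned long id = next_id++;
    objects[id] = obj;
    return id;
  }

  /* look up the server object addressed by an incoming request and dispatch */
  int dispatch(unsigned long id, int opcode, int arg) {
    return objects.at(id)->dispatch(opcode, arg);
  }
};
```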
Process environment
===================

As described in Section [Interfaces and Mechanisms], a newly created process
can communicate only with its immediate parent via its parent capability.
This parent capability gets created in a platform-dependent way:

| For example, on the L4v2 platform, the parent writes the information about
| the parent capability to a defined position in the new process' address space
| after decoding the ELF image. On the Linux platform, the parent
| uses environment variables to communicate the parent capability to the
| child.

Before entering the 'main' function of the new process, the process' startup
code 'crt0' is executed and initializes the _environment_ framework.
The environment contains RPC communication stubs for communicating with the
parent and the process' RM session, CPU session, PD session, and RAM
session.
Furthermore, the environment contains a heap that uses the process' RAM
session as backing store.
The environment can be used from the actual program by dereferencing the pointer
returned by the global function:
! Env *env();


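The accessor can be pictured as handing out a pointer to a process-wide
singleton that the startup code has already initialized. A minimal sketch of
this pattern, with a hypothetical 'Env' type that only marks its
initialization state, looks like this:

```cpp
#include <cassert>

/* hypothetical stand-in for the environment object; in the real framework,
   this would hold the RPC stubs for the parent and the process' sessions */
struct Env {
  bool initialized = false;
  /* ... parent, RM, CPU, PD, and RAM session stubs would live here ... */
};

/* process-wide singleton accessor; the real 'crt0' initializes the
   environment before 'main' is entered, here it is modeled lazily */
Env *env()
{
  static Env instance;
  instance.initialized = true;
  return &instance;
}
```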
Child management
================

The class 'Child' provides a generic and extensible abstraction that unifies
the creation of child processes, the serving of parent-interface requests, and
the bookkeeping of open sessions.
Different access-control and resource-trading policies can be realized by
inheriting from this class and supplementing suitable parent-interface server
functions.

A child process can be created by instantiating a 'Child' object:
!Child(const char *name,
!      Dataspace_capability    elf_ds_cap,
!      Ram_session_capability  ram_session_cap,
!      Cpu_session_capability  cpu_session_cap,
!      Cap_session            *cap_session,
!      char                   *args[])

*NOTE:* _The 'name' parameter is only used for debugging._
_The 'args' parameter is not yet supported._

;The 'Child' serves the parent interface for the new process by a distinct thread.


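The intended extension scheme, inheriting from 'Child' and overriding
parent-interface functions to implement a policy, can be sketched with a
hypothetical session-request hook. The 'session' method and the
'Filtering_child' class are illustrative and not part of the actual interface:

```cpp
#include <cassert>
#include <string>

/* mock-up of the generic 'Child' abstraction with one parent-interface hook */
struct Child {
  virtual ~Child() { }

  /* parent-interface function: a child asks its parent for a session */
  virtual bool session(std::string const &service) {
    return true;   /* default policy: grant every session request */
  }
};

/* custom access-control policy: white-list the services a child may use */
struct Filtering_child : Child {
  bool session(std::string const &service) override {
    return service == "LOG" || service == "RAM";  /* deny everything else */
  }
};
```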
Heap partitioning
=================

In Section [Goals and Challenges], where we introduced the different types of
components composing our system, we highlighted _resource multiplexers_ as
being critical for maintaining the isolation and independence of applications
from each other.
If a flawed resource multiplexer serves multiple clients at a time, information
may leak from one client to another (corrupting isolation), or different clients
may interfere with each other when sharing limited physical resources
(corrupting independence).
One particular limited resource that is typically shared among all
clients is the heap of the server.
If the server performs heap allocations on behalf of one client, this resource
may become exhausted, rendering the service unavailable to all other clients
(denial of service).
The resource-trading concept presented in Section [Quota] enables clients to
donate memory quota to a server for the duration of a session.
If the server's parent closes the session at the request of the client, the
donated resources must be released by the server.
To be able to comply with such a request without intervention by its parent,
the server must store the state of each session in dedicated dataspaces that
can be released independently from other sessions.
Instead of using one heap to hold anonymous memory allocations, the server
creates a _heap partition_ for each client and performs client-specific
allocations exclusively on the corresponding heap partition.
There exist two different classes to assist developers in partitioning the heap:
:'Heap': is an allocator that allocates chunks of memory as dataspaces from a
  RAM session. Each chunk may hold multiple allocations. This kind of heap
  corresponds loosely to a classical heap and can be used to allocate a high
  number of small memory objects. The used backing store gets released on the
  destruction of the heap.
:'Sliced_heap': is an allocator that uses a dedicated dataspace for each
  allocation. Therefore, each allocated block can be released independently
  from all other allocations.
The 'Sliced_heap' must be used to allocate the actual session objects such that
each session object resides in an independent dataspace.
Dynamic memory allocations during the lifetime of a session should be performed
by a 'Heap' that is a member of the session object.
When a session gets closed, the session object including its heap partition gets
destroyed, and all backing-store dataspaces can be released without interfering
with other clients.


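The effect of heap partitioning can be sketched with a per-session partition
that owns its backing-store chunks and releases them all on destruction. The
'Heap_partition' and 'Session' types below, and the global chunk counter, are
simplifications for illustration only:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

/* global stand-in for the amount of backing store held by heap partitions */
static int chunks_in_use = 0;

/* simplified heap partition: tracks its own chunks and releases them
   all when the partition (and thus the session) gets destroyed */
struct Heap_partition {
  std::vector<void *> chunks;

  void *alloc(std::size_t size) {
    void *p = std::malloc(size);
    chunks.push_back(p);
    chunks_in_use++;
    return p;
  }

  ~Heap_partition() {
    for (void *p : chunks) {
      std::free(p);
      chunks_in_use--;
    }
  }
};

/* session object owning its heap partition as a member */
struct Session {
  Heap_partition heap;   /* all session-local allocations go here */
};
```

Because each session owns its partition, closing one session releases exactly
that session's backing store and leaves all other sessions untouched.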
Limitations and Outlook
#######################

In its current incarnation, the design is subject to a number of limitations.
As a prime example for managing resources, we focused our work on physical
memory and ignored other prominent resource types such as processing time, bus
bandwidth, and network bandwidth.
We intend to apply the methodology that we developed for physical memory
analogously to other resource types in later design revisions.
We do not cover features such as virtual memory or transparent
copy-on-write support, which we regard as non-essential at the current stage.
At this point, we also do not provide specifics about the device-driver
infrastructure and legacy-software containers.
Note that the presented design does not fundamentally prevent the support
of these features.