genode/repos/dde_rump

Commit b0935ef9b2 (Christian Prochaska, 2017-08-28): VFS: nonblocking interface

The VFS library can be used in single-threaded or multi-threaded
environments, and depending on the environment, signals are handled either
by the thread that uses the VFS library or by a different thread. If a VFS
plugin needs to block to wait for a signal, there is currently no way to do
so that works reliably in both environments.

For this reason, this commit makes the interface of the VFS library
nonblocking, similar to the File_system session interface.

The most important changes are:

- Directories are created and opened with the 'opendir()' function and the
  directory entries are read with the recently introduced 'queue_read()'
  and 'complete_read()' functions.

- Symbolic links are created and opened with the 'openlink()' function and
  the link target is read with the 'queue_read()' and 'complete_read()'
  functions and written with the 'write()' function.

- The 'write()' function does not wait for signals anymore. This can have
  the effect that data written by a VFS library user has not been
  processed by a file system server yet when the library user asks for the
  size of the file or closes it (both done with RPC functions at the file
  system server). For this reason, a user of the VFS library should
  request synchronization before calling 'stat()' or 'close()'. To make
  sure that a file system server has processed all write request packets
  which a client submitted before the synchronization request,
  synchronization is now requested at the file system server with a
  synchronization packet instead of an RPC function. Because of this
  change, the synchronization interface of the VFS library is now split
  into 'queue_sync()' and 'complete_sync()' functions.
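
The split into 'queue_*()' and 'complete_*()' functions implies a new calling
pattern on the user side: queue the request, block for an I/O signal, and
retry the completion step until the request has finished. The following
caller-side sketch illustrates this pattern. It is only a sketch: the exact
signatures, the 'READ_QUEUED' return value, and the 'wait_for_io_signal()'
helper are illustrative assumptions, not the literal VFS library API.

! /* hypothetical helper that blocks until an I/O-progress signal arrives */
! void wait_for_io_signal();
!
! Vfs::file_size read_file(Vfs::Vfs_handle &handle, char *dst,
!                          Vfs::file_size count)
! {
!     /* enqueue the read request, waiting as long as the queue is full */
!     while (!handle.fs().queue_read(&handle, count))
!         wait_for_io_signal();
!
!     /* retry 'complete_read()' until the queued request has finished */
!     Vfs::file_size out_count = 0;
!     for (;;) {
!         auto const result =
!             handle.fs().complete_read(&handle, dst, count, out_count);
!         if (result != Vfs::File_io_service::READ_QUEUED)
!             break;
!         wait_for_io_signal();
!     }
!     return out_count;
! }

The same queue/complete pattern applies to 'queue_sync()' and
'complete_sync()' as well as to reading directory entries and symlink
targets.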

Fixes #2399

                                =========================
                                Genode's rump kernel port
                                =========================

This repository contains the Genode version of the
[http://wiki.netbsd.org/rumpkernel/ - rump kernel]. The kernel is currently
used to gain file-system access from within Genode. To this end, the
repository provides a Genode file-system server, located at
_src/server/rump_fs_. For accessing the server through the libc, the
_libc_fs_ plugin, which is available in the _libports_ repository, can be
used.

Building instructions
#####################

In order to build the file-system server, issue

! ./tool/ports/prepare_port dde_rump

from Genode's toplevel directory.


Add

! REPOSITORIES += $(GENODE_DIR)/repos/dde_rump

to the _etc/build.conf_ file of your build directory.

Finally,

! make server/rump_fs

called from your build directory will build the server. You may also issue

! make run/rump_ext2

to run a simple test scenario.


Configuration
#############

Here is an example snippet that configures the server:

!<start name="rump_fs">
!  <resource name="RAM" quantum="8M" />
!  <provides><service name="File_system"/></provides>
!  <config fs="ext2fs"><default-policy root="/" writeable="yes"/></config>
!</start>

The server requests a service that provides a Genode block session. If more
than one block-session server is present in the system, the session must be
routed to the intended server (see the route example below). The value of the
_fs_ attribute of the _config_ node can be one of the following: _ext2fs_ for
EXT2, _cd9660_ for ISO-9660, or _msdos_ for FAT file-system support. The
_root_ attribute defines the directory of the file system that the client
sees as its root directory. The server hands most of its RAM quota to the
rump kernel, so the larger the quota, the larger the rump kernel's internal
block caches.
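
For example, if the system contains both a block-device driver and a
partition server that provide Block sessions, a route like the following
within the _start_ node of _rump_fs_ directs the session to the intended
provider. The child name _part_blk_ is merely a placeholder for whatever
block server runs in your scenario:

!  <route>
!    <service name="Block"> <child name="part_blk"/> </service>
!    <any-service> <parent/> <any-child/> </any-service>
!  </route>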