genode/repos/os/src/server/nic_dump/interface.h


/*
 * \brief  A net interface in form of a signal-driven NIC-packet handler
 * \author Martin Stein
 * \date   2017-03-08
 */

/*
 * Copyright (C) 2016-2017 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU Affero General Public License version 3.
 */

#ifndef _INTERFACE_H_
#define _INTERFACE_H_

/* local includes */
#include "pointer.h"
#include "packet_log.h"

/* Genode includes */
#include <nic_session/nic_session.h>
#include <util/string.h>
#include <timer_session/connection.h>

namespace Genode { class Xml_node; }

namespace Net {

	using Packet_descriptor    = ::Nic::Packet_descriptor;
	using Packet_stream_sink   = ::Nic::Packet_stream_sink< ::Nic::Session::Policy>;
	using Packet_stream_source = ::Nic::Packet_stream_source< ::Nic::Session::Policy>;

	class Ethernet_frame;
	class Interface;

	using Interface_label = Genode::String<64>;
}
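
/*
 * One side of the nic_dump server's packet path: packets received at the
 * sink are logged according to the packet-log configuration and forwarded
 * to the peer interface that was registered via 'remote'
 */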
class Net::Interface
{
	protected:

		using Signal_handler = Genode::Signal_handler<Interface>;
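
		/*
		 * One dispatcher per packet-stream signal, presumably registered
		 * at the session's signal sources by the constructor in the
		 * matching .cc file and dispatched to the handler methods below
		 */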
		Signal_handler _sink_ack;
		Signal_handler _sink_submit;
		Signal_handler _source_ack;
		Signal_handler _source_submit;

	private:

		Genode::Allocator       &_alloc;
		Pointer<Interface>       _remote { };
		Interface_label          _label;
		Timer::Connection       &_timer;
		Genode::Duration        &_curr_time;
		bool                     _log_time;
		Packet_log_style const   _default_log_style;
		Packet_log_config const  _log_cfg;

		void _send(Ethernet_frame &eth, Genode::size_t const eth_size);

		void _handle_eth(void             *const eth_base,
		                 Genode::size_t    const eth_size,
		                 Packet_descriptor const &pkt);

		virtual Packet_stream_sink   &_sink()   = 0;
		virtual Packet_stream_source &_source() = 0;


		/***********************************
		 ** Packet-stream signal handlers **
		 ***********************************/

		void _ready_to_submit();
		void _ack_avail() { }
		void _ready_to_ack();
		void _packet_avail() { }

	public:

		Interface(Genode::Entrypoint &ep,
		          Interface_label     label,
		          Timer::Connection  &timer,
		          Genode::Duration   &curr_time,
		          bool                log_time,
		          Genode::Allocator  &alloc,
		          Genode::Xml_node    config);

		virtual ~Interface() { }

		void remote(Interface &remote) { _remote.set(remote); }
};
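
/*
 * Usage sketch (hypothetical, names for illustration only): a concrete
 * interface derives from 'Net::Interface' and hands out the packet streams
 * of its NIC session via the pure-virtual accessors, e.g.:
 *
 *   class Uplink : public Net::Interface
 *   {
 *     private:
 *
 *       Nic::Connection _nic;
 *
 *       Packet_stream_sink   &_sink()   override { return *_nic.rx(); }
 *       Packet_stream_source &_source() override { return *_nic.tx(); }
 *       ...
 *   };
 *
 * Two such interfaces are then cross-connected:
 *
 *   uplink.remote(downlink);
 *   downlink.remote(uplink);
 */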
#endif /* _INTERFACE_H_ */