timer test: raise tolerance at max trigger count

Timing itself costs time. Thus, the stressful timeout phase of the
test is not exactly as long as configured but slightly longer. This is why
the fast timeouts can trigger more often than expected (the timer has a
static timeout-rate limit). Normally, we account for this effect with an
error tolerance of 10%. But at least on foc x86_32 (PIT with a very low
max timeout), timing is so expensive that 10% is not enough. We have to
raise the tolerance to 11%.
Martin Stein 2017-10-23 15:07:21 +02:00 committed by Christian Helmuth
parent d4920eade4
commit 2eef27fca4
1 changed file with 1 addition and 1 deletion

@@ -79,7 +79,7 @@ struct Stress_test
 enum { DURATION_US         = DURATION_SEC * 1000 * 1000 };
 enum { MIN_TIMER_PERIOD_US = 1000 };
 enum { MAX_CNT_BASE        = DURATION_US / MIN_TIMER_PERIOD_US };
-enum { MAX_CNT_TOLERANCE   = MAX_CNT_BASE / 10 };
+enum { MAX_CNT_TOLERANCE   = MAX_CNT_BASE / 9 };
 enum { MAX_CNT             = MAX_CNT_BASE + MAX_CNT_TOLERANCE };
 enum { MIN_CNT             = DURATION_US / MAX_SLV_PERIOD_US / 2 };
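
For reference, a minimal sketch of the tolerance arithmetic behind this change,
assuming DURATION_SEC = 10 (the actual value is defined outside this hunk):
dividing MAX_CNT_BASE by 9 instead of 10 raises the allowed overshoot from 10%
to roughly 11.1%.

#include <cstdio>

enum { DURATION_SEC        = 10 };                                /* assumed, not part of this hunk */
enum { DURATION_US         = DURATION_SEC * 1000 * 1000 };
enum { MIN_TIMER_PERIOD_US = 1000 };
enum { MAX_CNT_BASE        = DURATION_US / MIN_TIMER_PERIOD_US };
enum { OLD_TOLERANCE       = MAX_CNT_BASE / 10 };                 /* 10%    */
enum { NEW_TOLERANCE       = MAX_CNT_BASE / 9 };                  /* ~11.1% */

int main()
{
	/* with the assumed duration: base 10000, old limit 11000, new limit 11111 */
	printf("base %d  old max %d  new max %d\n",
	       (int)MAX_CNT_BASE,
	       (int)(MAX_CNT_BASE + OLD_TOLERANCE),
	       (int)(MAX_CNT_BASE + NEW_TOLERANCE));
	return 0;
}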