[LTP] [PATCH 1/2] lib: multiply the timeout if detect slow kconfigs

Martin Doucha mdoucha@suse.cz
Mon Jan 6 17:03:20 CET 2025


On 06. 01. 25 13:10, Cyril Hrubis wrote:
> I still think that misusing max_runtime, which is supposed to be an
> upper bound for the actual test runtime, was a mistake.
> 
> Maybe we should have called the max_runtime a timeout and added a
> runtime for tests that need it. That way we would have a timeout
> comprising two parts: one would be the 30s that is used for all tests,
> and the second part would come from the tst_test structure. The sum of
> these two would then be multiplied by the timeout multipliers. Then we
> would have a runtime, which would be used only by tests that call
> tst_remaining_runtime().
> 
> The overall test timeout would be then:
> 
> (default_30s_timeout + tst_test->timeout) * TIMEOUT_MUL + tst_test->runtime * RUNTIME_MUL
> 
> What do you think?

Hi,
sorry, but I still don't follow the logic in the math above. I agree 
that "runtime" should control test iteration and "timeout" should be a 
hard limit on test execution. But then it doesn't make sense to add 
these two numbers, and RUNTIME_MUL would be pointless. Instead, the 
total timeout (for a single testcase/filesystem test) should be 
calculated like this:

default_30s_timeout * TIMEOUT_MUL +
    MAX(MAX(1, tst_test->timeout) * TIMEOUT_MUL, tst_test->runtime)

If you want to force a different runtime value, it should be done 
through the -I command line parameter. We could also replace the 
"duration" logic in testrun() with tst_remaining_runtime(), which would 
allow looping tests for a fixed amount of time by default just by 
setting the tst_test->runtime attribute, without any loop code inside 
the test function itself.

-- 
Martin Doucha   mdoucha@suse.cz
SW Quality Engineer
SUSE LINUX, s.r.o.
CORSO IIa
Krizikova 148/34
186 00 Prague 8
Czech Republic
