[LTP] [RFC PATCH] starvation: set a baseline for maximum runtime

Cyril Hrubis chrubis@suse.cz
Tue Nov 26 11:28:05 CET 2024


Hi!
> The commit ec14f4572 ("sched: starvation: Autocallibrate the timeout")
> introduced a runtime calibration mechanism to dynamically adjust test
> timeouts based on CPU speed.
> 
> While this works well for slower systems like microcontrollers or ARM
> boards, it struggles to determine appropriate runtimes for modern CPUs,
> especially when debugging kernels with significant overhead.

Wouldn't it be better to either skip the test on kernels with debugging
config options enabled? Or multiply the timeout we got from the
calibration when we detect a debugging kernel?

The problem is that any number we put there will no longer be correct in
a few years as CPU and RAM speeds increase, and the test will then be
effectively doing nothing, because the default we pick now will also
cover kernels that are overly slow on future hardware.

-- 
Cyril Hrubis
chrubis@suse.cz
