[LTP] [RFC PATCH] starvation: set a baseline for maximum runtime
Li Wang
liwang@redhat.com
Tue Nov 26 11:59:28 CET 2024
On Tue, Nov 26, 2024 at 6:28 PM Cyril Hrubis <chrubis@suse.cz> wrote:
> Hi!
> > The commit ec14f4572 ("sched: starvation: Autocallibrate the timeout")
> > introduced a runtime calibration mechanism to dynamically adjust test
> > timeouts based on CPU speed.
> >
> > While this works well for slower systems like microcontrollers or ARM
> > boards, it struggles to determine appropriate runtimes for modern CPUs,
> > especially when debugging kernels with significant overhead.
>
> Wouldn't it be better to either skip the test on kernels with debugging
> config options enabled, or multiply the timeout we got from the calibration
> when we detect a debugging kernel?
>
Well, we have not found a reliable way to detect debug kernels in LTP.
Looking at our RHEL9 kernel config file, even the general (non-debug)
kernel enables options like CONFIG_DEBUG_KERNEL=y:
# uname -r
5.14.0-533.el9.x86_64
# grep CONFIG_DEBUG_KERNEL /boot/config-5.14.0-533.el9.x86_64
CONFIG_DEBUG_KERNEL=y
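Since CONFIG_DEBUG_KERNEL=y is set even on general-purpose kernels, a detection heuristic would have to probe for the heavier instrumentation options instead. A minimal sketch of that idea (not existing LTP code; the option list and function name are illustrative, not exhaustive):

```shell
#!/bin/sh
# Sketch: CONFIG_DEBUG_KERNEL=y alone is too broad a signal, since
# distribution kernels enable it by default. Instead, probe for options
# known to add large runtime overhead. The list below is illustrative.
is_heavy_debug_kernel()
{
	config="$1"
	for opt in CONFIG_KASAN CONFIG_PROVE_LOCKING CONFIG_DEBUG_PAGEALLOC; do
		grep -q "^${opt}=y" "$config" && return 0
	done
	return 1
}

# Example: check the running kernel's config file if one is installed.
cfg="/boot/config-$(uname -r)"
if [ -f "$cfg" ] && is_heavy_debug_kernel "$cfg"; then
	echo "heavy debug kernel detected"
fi
```

Even this remains best-effort: the config file may be absent, and the set of "expensive" options differs between kernel versions and distributions.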
> The problem is that any number we put there will not be correct in a few
> years as CPU and RAM speed increase and the test will be effectively
> doing nothing because the default we put there will cover kernels that
> are overly slow on a future hardware.
>
Sounds reasonable. A hardcoded baseline time is not a wise approach;
it may still fail to satisfy some slower boards or newer processors.
--
Regards,
Li Wang