[LTP] [RFC PATCH] starvation: set a baseline for maximum runtime

Li Wang liwang@redhat.com
Wed Nov 27 11:08:42 CET 2024


On Wed, Nov 27, 2024 at 5:46 PM Cyril Hrubis <chrubis@suse.cz> wrote:

> Hi!
> > I have carefully compared the differences between the general
> > kernel config-file and the debug kernel config-file.
> >
> > Below are some configurations that are only enabled in the debug
> > kernel and may cause kernel performance degradation.
> >
> > The rough thought I have is to create a SET of those configurations.
> > If the SUT kernel matches some of them, we reset the timeout using a
> > multiplier obtained from calibration.
> >
> > e.g. if N of the configs are matched, we use (timeout * N) as the
> > max_runtime.
> >
> > Or, going further, we could extend this method to the whole LTP
> > timeout setting if possible?
>
> That actually sounds good to me, if we detect certain kernel options
> that are known to slow down process execution it makes good sense
> to multiply the timeouts for all tests directly in the test library.
>

Thanks.

After thinking it over, I believe we should apply this method _only_ to
a few particularly slow tests (i.e. the ones that time out most easily).
Doing the kernel-option examination in the library for every test would
be a needless burden for the many quick tests that finish in a few
seconds, far below the default 30s.
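
For illustration, the detection behind that could look roughly like the
sketch below. This is not the actual patch: the option list and the
config file path are only examples, and a real implementation would
reuse LTP's existing kconfig parsing helpers rather than open-coding
the parsing.

#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

/* Example set of debug options known to slow the kernel down. */
static const char *const slow_opts[] = {
	"CONFIG_KASAN=y",
	"CONFIG_PROVE_LOCKING=y",
	"CONFIG_DEBUG_OBJECTS=y",
	"CONFIG_DEBUG_KMEMLEAK=y",
};

/* Count how many of the listed options are enabled in the running kernel. */
static unsigned int count_slow_opts(void)
{
	struct utsname un;
	char path[256], line[512];
	unsigned int i, hits = 0;
	FILE *f;

	if (uname(&un))
		return 0;

	snprintf(path, sizeof(path), "/boot/config-%s", un.release);

	f = fopen(path, "r");
	if (!f)
		return 0;

	while (fgets(line, sizeof(line), f)) {
		for (i = 0; i < sizeof(slow_opts) / sizeof(slow_opts[0]); i++) {
			if (!strncmp(line, slow_opts[i], strlen(slow_opts[i])))
				hits++;
		}
	}

	fclose(f);
	return hits;
}

With N options matched, the library would then use (timeout * N) as the
max_runtime, as discussed above, and the whole check would only run for
tests that opt in.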

Therefore, I came up with a new option for .max_runtime:
TST_DYNAMICAL_RUNTIME, similar to the TST_UNLIMITED_RUNTIME we already
have. A test that sets .max_runtime = TST_DYNAMICAL_RUNTIME will have a
suitable runtime worked out at execution time.

See: https://lists.linux.it/pipermail/ltp/2024-November/040990.html
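
For concreteness, a slow test opting in would then look roughly like
this (a sketch on top of the proposed patch, since TST_DYNAMICAL_RUNTIME
only exists there; the test body is just a placeholder):

#include "tst_test.h"

static void run(void)
{
	/* the real test would do its CPU-bound work here */
	tst_res(TPASS, "task was not starved");
}

static struct tst_test test = {
	.test_all = run,
	/* let the library calibrate a proper runtime at execution time */
	.max_runtime = TST_DYNAMICAL_RUNTIME,
};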

-- 
Regards,
Li Wang

