[LTP] [PATCH v2] tst_test: Add min_runtime to control lower bound of scaled runtime
Li Wang
liwang@redhat.com
Tue Jun 24 14:45:35 CEST 2025
On Tue, Jun 24, 2025 at 8:36 PM Li Wang <liwang@redhat.com> wrote:
> Hi All,
>
> Li Wang <liwang@redhat.com> wrote:
>
>
>> But with the default 1024 min_samples, I still have no idea how
>> long .min_runtime needs to be. Maybe we can estimate and set
>> .min_runtime manually on a slow system.
>>
>
> I chose to run the fuzzy_sync tests on a Cortex-A55 CPU with
> only one core assigned, using the taskset command to simulate
> an extreme single-core execution scenario.
>
> Tested on:
> Cortex-A55, 1.2GHz
> Linux 5.14, aarch64
>
> For example:
> time taskset -c 0 ./cve-2016-7117 (test hacked to do sampling only)
>
> This setup allows me to evaluate how long the sampling phase
> takes under constrained conditions.
>
> Based on the observed sampling duration, I apply the following policy
> for setting .min_runtime:
>
> 1. If the sampling time is very short (less than 1 second),
> I simply set .min_runtime = 2.
>
> 2. If the sampling time is longer than 2 seconds but still less than
> .runtime, I set .min_runtime to twice the sampling time, while
> keeping .runtime unchanged.
>
> 3. If the sampling time exceeds .runtime, I also double the sampling
> time for .min_runtime, but leave .runtime as is.
>
Correction to item 3: if the sampling time exceeds .runtime, I simply
set .min_runtime to the .runtime value and remove .runtime.
--
Regards,
Li Wang