[LTP] [PATCH 1/1] readahead02: Sleep 1.5 msec to fix problem on bare metal

Li Wang liwang@redhat.com
Mon Nov 24 13:58:44 CET 2025


On Fri, Nov 21, 2025 at 7:41 PM Cyril Hrubis <chrubis@suse.cz> wrote:

> Hi!
> > > Adding a short sleep is a good start. However I'm afraid that we will
> > > need a bit more complex solution to this problem. Maybe do a short
> > > sleep, check the cache size and if it increased more than some
> > > threshold, sleep again.
> >
> > > Something as:
> >
> > >     int retries = MAX_RETRIES;
> > >     unsigned long cached_prev, cached_cur = get_cached_size();
> >
> > >     do {
> > >             usleep(SHORT_SLEEP);
> >
> > >             cached_prev = cached_cur;
> > >             cached_cur = get_cached_size();
> >
> > >             if (cached_cur < cached_prev)
> > >                     break;
> >
> > >             if (cached_cur-cached_prev < CACHE_INC_THRESHOLD)
> > >                     break;
> >
> > >     } while (retries-- > 0);
> >
> > Yeah, a few loops with a shorter usleep() and proactive checking is for
> > sure way better than a single usleep(). Will you have time to send the
> > above as a patch? I'll test it for you.
>
> The hard part is tuning the constants right.
>
> If we assume that on the slowest low end device we would get around
> 5MB/s (that's how slow an SD card in a RPi can apparently be:
> https://elinux.org/RPi_SD_cards#SD_card_performance)
> and if we allow this to be a bit less precise, we can assume that the
> speed is 5 bytes per 1 us (since USEC_PER_SEC / BYTES_IN_MB is roughly 1).
>
> From that the number of retries should be readahead_size / (5 * SHORT_SLEEP)
> and I would put the short sleep somewhere in the few milliseconds range,
> which means that the number of retries would end up somewhere between a
> hundred and a thousand when readahead_size is in megabytes. This also means
> that the minimal amount read in one loop iteration is 5 * SHORT_SLEEP bytes.
> However with SHORT_SLEEP in the few millisecond range that minimum is in
> the range of a few pages, so I guess that we can settle for running the
> loop for as long as the cache increases.
>
> So I suppose that we want something like this:
>
> diff --git a/testcases/kernel/syscalls/readahead/readahead02.c b/testcases/kernel/syscalls/readahead/readahead02.c
> index f007db187..a2118c5ab 100644
> --- a/testcases/kernel/syscalls/readahead/readahead02.c
> +++ b/testcases/kernel/syscalls/readahead/readahead02.c
> @@ -39,6 +39,7 @@ static char testfile[PATH_MAX] = "testfile";
>  #define MEMINFO_FNAME "/proc/meminfo"
>  #define PROC_IO_FNAME "/proc/self/io"
>  #define DEFAULT_FILESIZE (64 * 1024 * 1024)
> +#define SHORT_SLEEP_US 5000
>
>  static size_t testfile_size = DEFAULT_FILESIZE;
>  static char *opt_fsizestr;
> @@ -173,6 +174,38 @@ static int read_testfile(struct tcase *tc, int do_readahead,
>
>                         i++;
>                         offset += readahead_length;
> +
> +                       /*
> +                        * We assume that the worst case I/O speed is around
> +                        * 5MB/s, which is roughly 5 bytes per 1 us. That
> +                        * gives us an upper bound for retries of
> +                        * readahead_size / (5 * SHORT_SLEEP_US).
> +                        *
> +                        * We also monitor how the cache size increases
> +                        * before and after the sleep. With the same
> +                        * assumption about the speed we are supposed to read
> +                        * at least 5 * SHORT_SLEEP_US bytes during that
> +                        * time. That amount is generally quite close to a
> +                        * page size, so we simply loop for as long as the
> +                        * cache keeps growing.
> +                        *
> +                        * Of course all of this is imprecise on a
> +                        * multitasking OS, however even on a system where
> +                        * several processes are fighting for I/O this loop
> +                        * will wait as long as the cache is increasing,
> +                        * which gives us a high chance of waiting for the
> +                        * readahead to happen.
> +                        */
> +                       int retries = readahead_size / (5 * SHORT_SLEEP_US);
> +                       unsigned long cached_prev, cached_cur = get_cached_size();
> +
> +                       do {
> +                               usleep(SHORT_SLEEP_US);
> +
> +                               cached_prev = cached_cur;
> +                               cached_cur = get_cached_size();
> +
> +                               if (cached_cur <= cached_prev)
> +                                       break;
> +                       } while (retries-- > 0);
> +
>                 } while ((size_t)offset < fsize);
>                 tst_res(TINFO, "readahead calls made: %zu", i);
>                 *cached = get_cached_size();
>
>
> Li, Jan, what do you think?
>


This is a nice improvement, but one thing comes to my mind:
get_cached_size() reads the system-wide “Cached” size from
'/proc/meminfo', which might not be reliable in the test (it is
probably influenced by other processes).

So, how about using mincore() on the currently mapped pages
to count the bytes resident in memory?
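
To illustrate the idea, here is a rough sketch (get_mapped_cached_size()
is just a made-up name for such a helper; it assumes the test file is
already mmap()ed, so that addr/length describe only that mapping and not
the whole system cache):

#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/*
 * Count how many bytes of the mapped region [addr, addr + length) are
 * currently resident in memory. addr is expected to be page aligned,
 * as returned by mmap().
 */
static long get_mapped_cached_size(void *addr, size_t length)
{
        long page_size = sysconf(_SC_PAGESIZE);
        size_t pages = (length + page_size - 1) / page_size;
        unsigned char *vec;
        size_t i;
        long resident = 0;

        vec = malloc(pages);
        if (!vec)
                return -1;

        /* mincore() fills one byte per page, bit 0 set == page resident */
        if (mincore(addr, length, vec)) {
                free(vec);
                return -1;
        }

        for (i = 0; i < pages; i++) {
                if (vec[i] & 1)
                        resident += page_size;
        }

        free(vec);
        return resident;
}

The retry loop in the patch could then compare this per-mapping count
instead of the /proc/meminfo value, so other processes touching the page
cache would not confuse the exit condition.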


-- 
Regards,
Li Wang

