[LTP] [PATCH v2] madvise06: shrink to 3 MADV_WILLNEED pages to stabilize the test
Richard Palethorpe
rpalethorpe@suse.de
Tue Jun 21 10:27:56 CEST 2022
Hello Li,
Li Wang <liwang@redhat.com> writes:
> Paul Bunyan reports that the madvise06 test fails intermittently with many
> LTS kernels. After checking with the mm developers, we think this is
> more likely a test issue than a kernel bug:
>
> madvise06.c:231: TFAIL: 4 pages were faulted out of 2 max
>
> So this improvement aims to reduce the false positives in three ways:
>
> 1. Add a while-loop to give the asynchronous read-ahead started by
>    madvise_willneed() more chances to complete (sketched below)
> 2. Raise the value of `loop` so the test waits longer if the swap cache
>    hasn't yet reached the expected size
> 3. Shrink the MADV_WILLNEED range to only 3 pages so the system can
>    bring them back in more easily
>
> From Rafael Aquini:
>
> The problem here is that MADV_WILLNEED is an asynchronous, non-blocking
> hint, which tells the kernel to start read-ahead work for the hinted
> memory chunk but does not wait for the read-ahead to finish.
> So, when the dirty_pages() call starts re-dirtying the pages in that
> target area, it may be racing against a scheduled swap-in read-ahead
> that hasn't yet finished. Expecting only 2 faulted pages out of 102400
> also seems too strict for a PASS threshold.
>
> Note:
> As Rafael suggested, another possible approach to tackle this failure
> is to tally the major faults and loosen the threshold to more than 2
> after the call to madvise() with MADV_WILLNEED.
> But in my testing, the number of faulted-out pages shows significant
> variance across platforms, so I didn't take that approach.
>
> Btw, this patch passed more than 1000 times on my two systems where the
> failure is easy to reproduce.
>
> Reported-by: Paul Bunyan <pbunyan@redhat.com>
> Signed-off-by: Li Wang <liwang@redhat.com>
> Cc: Rafael Aquini <aquini@redhat.com>
> Cc: Richard Palethorpe <rpalethorpe@suse.com>
Reviewed-by: Richard Palethorpe <rpalethorpe@suse.com>
--
Thank you,
Richard.