[LTP] [QUESTION] ltp: madvise06 failed when the task scheduled to another cpu

Li Wang liwang@redhat.com
Sat Feb 18 06:49:41 CET 2023


Hi Yongqiang,

Sorry for the late reply; I missed your email because of a filter.
Next time, please remember to CC the LTP mailing list: ltp@lists.linux.it

We previously submitted a patch to reduce the chance of this happening:

https://github.com/linux-test-project/ltp/commit/00e769e63515e51ee1020314efcf4fe880c46d7c
And in our team's testing, no similar failures have occurred since then.

-----------------------

BTW, recently we caught another issue:
      43 madvise06.c:201: TFAIL: less than 102400 Kb were moved to the swap cache

And I started an RFC patch here:
    https://lists.linux.it/pipermail/ltp/2023-February/032945.html

On Mon, Oct 11, 2021 at 4:14 PM Yongqiang Liu <liuyongqiang13@huawei.com>
wrote:

> Hi,
>
> when running this case on a 5.10-lts kernel, it triggers the following
> failure:
>
>   ......
>
>      madvise06.c:74: TINFO:  memory.kmem.usage_in_bytes: 1752 Kb
>      madvise06.c:208: TPASS: more than 102400 Kb were moved to the swap
> cache
>      madvise06.c:217: TINFO: PageFault(madvice / no mem access): 102401
>      madvise06.c:221: TINFO: PageFault(madvice / mem access): 102417
>      madvise06.c:82: TINFO: After page access
>      madvise06.c:84: TINFO:  Swap: 307372 Kb
>      madvise06.c:86: TINFO:  SwapCached: 101820 Kb
>      madvise06.c:88: TINFO:  Cached: 103004 Kb
>      madvise06.c:74: TINFO:  memory.kmem.usage_in_bytes: 0 Kb
>      madvise06.c:225: TFAIL: 16 pages were faulted out of 2 max
>
> and we found that when we called madvise() the task was scheduled to
> another CPU:
>
> ......
>
> tst_res(TINFO, "before madvise MEMLIMIT CPU:%d", sched_getcpu());--->cpu0
>
> TEST(madvise(target, MEM_LIMIT, MADV_WILLNEED));
>
> tst_res(TINFO, "after madvise MEMLIMIT CPU:%d", sched_getcpu());--->cpu1
>
> ......
>
> tst_res(TINFO, "before madvise PASS_THRESHOLDCPU:%d",
> sched_getcpu());-->cpu1
>
> TEST(madvise(target, PASS_THRESHOLD, MADV_WILLNEED));
>
> tst_res(TINFO, "after madvise PASS_THRESHOLDCPU:%d",
> sched_getcpu());-->cpu0
>
> .....
>
> Was the per-CPU swap_slots data not handled well?
>
>
> Applying the following patches almost fixes the error:
>
> e9b9734b7465 sched/fair: Reduce cases for active balance
>
> 8a41dfcda7a3 sched/fair: Don't set LBF_ALL_PINNED unnecessarily
>
> fc488ffd4297 sched/fair: Skip idle cfs_rq
>
> but binding the task to a CPU can also solve this problem; a minimal
> sketch follows.
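>
> For illustration, such pinning could look like the sketch below, using
> sched_setaffinity(2) (pin_to_cpu is a made-up helper and CPU 0 an
> arbitrary choice, neither is part of madvise06):
>
> #define _GNU_SOURCE
> #include <sched.h>
> #include "tst_test.h"
>
> static void pin_to_cpu(int cpu)
> {
>         cpu_set_t set;
>
>         CPU_ZERO(&set);
>         CPU_SET(cpu, &set);
>         /* pid 0 pins the calling task, keeping madvise() on one CPU */
>         if (sched_setaffinity(0, sizeof(set), &set))
>                 tst_brk(TBROK | TERRNO, "sched_setaffinity() failed");
> }
>
> Calling pin_to_cpu(0) in the test setup keeps sched_getcpu() stable
> across both madvise() calls above.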
>
> Kind regards,
>
>
>

-- 
Regards,
Li Wang

