[LTP] [PATCH] madvise06: wait a bit after madvise() call

Chunyu Hu chuhu@redhat.com
Fri Jul 22 12:49:51 CEST 2016



----- Original Message -----
> From: "Jan Stancek" <jstancek@redhat.com>
> To: "Li Wang" <liwang@redhat.com>, "Chunyu Hu" <chuhu@redhat.com>
> Cc: ltp@lists.linux.it
> Sent: Thursday, July 21, 2016 10:23:27 PM
> Subject: Re: [LTP] [PATCH] madvise06: wait a bit after madvise() call
> 
> On 07/21/2016 01:02 PM, Li Wang wrote:
> > On Thu, Jul 21, 2016 at 06:31:58AM -0400, Chunyu Hu wrote:
> >>>
> >>> If you still have the setup, can you try how reliable is this approach?
> >>
> >> I also tried it on my desktop. I copied the file as a.c and compiled it
> >> in LTP. The result is that on a freshly booted system with little page
> >> cache, the test passes reliably. But if the cache was already exhausted
> >> beforehand, it can fail, because the threshold is too large to reach.
> >> Just FYI.
> 
> I'm not sure I follow here, your /proc/meminfo shows:
> Cached:           260124 kB
> SwapCached:        38096 kB
> 
> That doesn't seem very high to me.

Sorry, that was just to show the system info. I didn't save the output from the
beginning; this is the state after a reboot.

The other case that reproduced the false-positive issue is when another WILL_NEED
process is swapping a large amount of memory (4G) at the same time.



> > 
> > Yes, Chunyu's first run of the case failed on his desktop (uptime more
> > than 30 days); after rebooting it could PASS.
> 
> I'm starting to run out of ideas for how we can test this somewhat reliably.
> 
> Attached is approach v3, which sets up memory cgroup:
> - memory.limit_in_bytes is 128M
> - we allocate 512M
> - as a consequence, ~384M should be swapped while the system should still
>   have plenty of free memory, which should be available for cache
> 
> Regards,
> Jan
> 
> 
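The v3 setup described above can be sketched with cgroup v1 controls roughly
as follows (a config sketch only — paths assume a legacy memory controller
mount, and how the real test drives the cgroup is an assumption here, not
taken from the patch):

```shell
# Sketch only: assumes a cgroup v1 memory controller mounted at
# /sys/fs/cgroup/memory and swap enabled; must be run as root.
cg=/sys/fs/cgroup/memory/madvise06
mkdir -p "$cg"
echo $((128 * 1024 * 1024)) > "$cg/memory.limit_in_bytes"  # 128M limit
echo $$ > "$cg/tasks"   # move this shell (and its children) into the cgroup
# The test then allocates 512M; with the 128M limit, roughly 384M of it
# must be pushed out to swap while the rest of the system still has free
# memory available for page cache.
```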

-- 
Regards,
Chunyu Hu


