[LTP] [RFC PATCH] fallocate05: increase the fallocate and defallocate size

Li Wang liwang@redhat.com
Sun Sep 26 09:39:35 CEST 2021


On Sat, Sep 25, 2021 at 2:25 AM Cyril Hrubis <chrubis@suse.cz> wrote:

> Hi!
> > >That's weird.
> > >
> > >What about the testing result with unlimit the tmpfs size?
> >
> > With the .dev_min_size field set to zero, it still causes OOM. Looking
> > at the test output, it seems to use 256MB in this case:
>

With .dev_min_size == 0 the test library falls back to a default size of
256MB instead.

However, unlimiting the tmpfs size does not mean setting .dev_min_size to
zero. It means returning mnt_data directly from limit_tmpfs_mount_size(),
which is also what the 20210524 version did.

e.g.

--- a/lib/tst_test.c
+++ b/lib/tst_test.c
@@ -892,6 +892,8 @@ static void prepare_and_mount_dev_fs(const char
*mntpoint)
 static const char *limit_tmpfs_mount_size(const char *mnt_data,
                char *buf, size_t buf_size, const char *fs_type)
 {
+       return mnt_data;
+
        if (strcmp(fs_type, "tmpfs"))
                return mnt_data;



> >
> > tst_test.c:1421: TINFO: Testing on tmpfs
> > tst_test.c:922: TINFO: Skipping mkfs for TMPFS filesystem
> > tst_test.c:903: TINFO: Limiting tmpfs size to 256MB
> > tst_test.c:1353: TINFO: Timeout per run is 0h 15m 00s
> > tst_fill_fs.c:32: TINFO: Creating file mntpoint/file0 size 21710183
> > tst_fill_fs.c:32: TINFO: Creating file mntpoint/file1 size 8070086
> > tst_fill_fs.c:32: TINFO: Creating file mntpoint/file2 size 3971177
> > tst_fill_fs.c:32: TINFO: Creating file mntpoint/file3 size 36915315
> > tst_fill_fs.c:32: TINFO: Creating file mntpoint/file4 size 70310993
> > tst_fill_fs.c:32: TINFO: Creating file mntpoint/file5 size 4807935
> > tst_fill_fs.c:32: TINFO: Creating file mntpoint/file6 size 90739786
> > tcf-agent invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE),
> order=0, oom_score_adj=0
> > [...]
> > Mem-Info:
> > active_anon:229 inactive_anon:44809 isolated_anon:0
> >   active_file:7 inactive_file:4 isolated_file:0
> >   unevictable:0 dirty:0 writeback:0
> >   slab_reclaimable:1205 slab_unreclaimable:3757
> >   mapped:334 shmem:42064 pagetables:226 bounce:0
> >   free:1004 free_pcp:0 free_cma:0
> > Node 0 active_anon:916kB inactive_anon:179236kB active_file:28kB
> inactive_file:88kB unevictable:0kB isolated(anon):0kB isolated(file) :0kB
> mapped:1336kB dirty:0kB writeback:0kB shmem:168256kB writeback_tmp:0kB
> kernel_stack:1016kB all_unreclaimable? no
> > Normal free:3776kB min:1872kB low:2340kB high:2808kB
> > reserved_highatomic:0KB active_anon:916kB inactive_anon:179236kB
> active_file:28k B inactive_file:16kB unevictable:0kB writepending:0kB
> present:262144kB managed:220688kB mlocked:0kB pagetables:904kB bounce:0kB
> free_pcp:224kB local_pcp:0kB free_cma:0kB
> > lowmem_reserve[]: 0 0 0
> > Normal: 1*4kB (M) 1*8kB (M) 22*16kB (U) 35*32kB (UE) 16*64kB (UE)
> 9*128kB (UE) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0* 16384kB
> = 3660kB
> > 42138 total pagecache pages
>
> That is strange, for me the tmpfs starts to return ENOSPC when the
> system is getting low on memory.
>

Maybe he has some OOM-related kernel options enabled that you don't,
or some other configuration we don't know about.


-- 
Regards,
Li Wang