[LTP] Question on hugemmap34
Wei Gao
wegao@suse.com
Tue Dec 10 12:53:26 CET 2024
Hi all,
Is there any special config needed for this test case? The test fails with the following output on my setup (openSUSE Leap 15.5 with a 6.12 kernel):
tst_hugepage.c:84: TINFO: 1 hugepage(s) reserved
tst_tmpdir.c:317: TINFO: Using /tmp/LTP_hugLSJb7r as tmpdir (btrfs filesystem)
tst_test.c:1100: TINFO: Mounting none to /tmp/LTP_hugLSJb7r/hugetlbfs fstyp=hugetlbfs flags=0
tst_test.c:1890: TINFO: LTP version: 20240930
tst_test.c:1894: TINFO: Tested kernel: 6.12.3-lp155.11.g78b0030-vanilla #1 SMP Fri Dec 6 08:56:39 UTC 2024 (78b0030) ppc64le
tst_test.c:1727: TINFO: Timeout per run is 0h 00m 30s
tst_coredump.c:32: TINFO: Avoid dumping corefile for process(pid=6671)
hugemmap34.c:88: TBROK: waitpid(0,0x7fffd8baa220,0) failed: ECHILD (10)
hugemmap34.c:92: TFAIL: Child: exited with 2
The root cause is that the mmap call fails with EBUSY.
The LTP mmap call
https://github.com/linux-test-project/ltp/blob/7bb960cc4f736d8860b6b266119e71e761e22b32/testcases/kernel/mem/hugetlb/hugemmap/hugemmap34.c#L71
hits this kernel code:
https://elixir.bootlin.com/linux/v6.12/source/arch/powerpc/mm/book3s64/slice.c#L568
Let me give an example to explain why this happens, based on the pmap output of the process on my test system:
Address Kbytes RSS PSS Dirty Swap Mode Mapping
0000000010000000 256 256 128 256 0 r-xp- /root/ltp/testcases/kernel/mem/hugetlb/hugemmap/hugemmap34
0000000010040000 64 64 32 64 0 r--p- /root/ltp/testcases/kernel/mem/hugetlb/hugemmap/hugemmap34
0000000010050000 64 64 64 64 0 rw-p- /root/ltp/testcases/kernel/mem/hugetlb/hugemmap/hugemmap34
0000000010060000 64 64 64 64 0 rw-p- [ anon ]
0000010010090000 192 64 64 64 0 rw-p- [ anon ]
00007fff8f3b0000 2368 1408 0 0 0 r-xp- /lib64/libc.so.6
00007fff8f600000 64 64 32 64 0 r--p- /lib64/libc.so.6
00007fff8f610000 64 64 64 64 0 rw-p- /lib64/libc.so.6
00007fff8f620000 64 64 32 64 0 rw-s- /dev/shm/ltp_hugemmap34_15513 (deleted)
00007fff8f630000 128 0 0 0 0 r--p- [ anon ]
00007fff8f650000 64 0 0 0 0 r-xp- [ anon ]
00007fff8f660000 320 128 0 0 0 r-xp- /lib64/ld64.so.2
00007fff8f6b0000 64 64 32 64 0 r--p- /lib64/ld64.so.2
00007fff8f6c0000 64 64 64 64 0 rw-p- /lib64/ld64.so.2
00007fffc6740000 192 64 64 64 0 rw-p- [ stack ] <<<<
---------------- ------- ------- ------- ------- -------
total kB 20416 2432 1318 896 0
Config of the Power test system:
#getconf PAGE_SIZE
65536
#grep Hugepagesize /proc/meminfo
Hugepagesize: 16384 kB
The kernel splits the VM space into:
16 low slices (64 KB page size), each 256 MB in size
4096 high slices (64 KB page size), each 1 TB in size
00007fffc6740000 (the stack) belongs to the 127th high slice (range 00007f0000000000 - 00007fffffffffff).
When mmap tries to allocate a 16 MB area (page size MMU_PAGE_16M) near the stack address (00007fffc6740000), the kernel first checks good_mask, which fails because all slices are MMU_PAGE_64K while the mmap request's page size is MMU_PAGE_16M.
https://elixir.bootlin.com/linux/v6.12/source/arch/powerpc/mm/book3s64/slice.c#L531
Next the kernel checks potential_mask (searching for slices with no VM mapped at all). The 127th slice is occupied; the 126th is a good candidate, but its address range does not match the mmap request address (0x00007fffc6740000 - 2 * hpage_size), so EBUSY is finally returned.
https://elixir.bootlin.com/linux/v6.12/source/arch/powerpc/mm/book3s64/slice.c#L559
BTW: I tested a scenario with MAP_FIXED_NOREPLACE disabled; the kernel then allocates successfully within the range of the 126th high slice.
Thanks.
Regards
Wei Gao