[LTP] hugemmap24 failure on aarch64 with 512MB hugepages

Li Wang liwang@redhat.com
Thu Mar 9 15:01:02 CET 2023


On Thu, Mar 9, 2023 at 6:01 PM Jan Stancek <jstancek@redhat.com> wrote:

> On Thu, Mar 9, 2023 at 9:29 AM Li Wang <liwang@redhat.com> wrote:
> >
> > [Cc'ing Jan Stancek]
> >
> > On Wed, Mar 8, 2023 at 5:51 PM Cyril Hrubis <chrubis@suse.cz> wrote:
> >>
> >> Hi!
> >> Looks like the hugemmap24 test fails on aarch64 with 512MB hugepages
> >> since it attempts to MAP_FIXED at a NULL address. Any idea why aarch64
> >> is limited to 0x10000000 as the slice boundary?
> >
> >
> > It looks like a generic/arbitrary slice_boundary that the test tries
> > to use as a basic gap between two available free neighboring slices.
> >
> >
> > https://github.com/libhugetlbfs/libhugetlbfs/commit/8ee2462f3f6eea72067641a197214610443576b8
> > https://github.com/libhugetlbfs/libhugetlbfs/commit/399cda578564bcd52553ab88827a82481b4034d1
> >
> > I guess it doesn't hurt to simply increase the size of the boundary.
> > Or, we can skip the test on a big page-size system like aarch64
> > (with 512MB hugepages) if it is unable to find free slices.
> >
> > The test passed on my side with this patch:
> >
> > --- a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24.c
> > +++ b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24.c
> > @@ -37,7 +37,7 @@ static int init_slice_boundary(int fd)
> >  #else
> >         /* powerpc: 256MB slices up to 4GB */
> >         slice_boundary = 0x00000000;
> > -       slice_size = 0x10000000;
> > +       slice_size = 0x100000000;
>
> This would likely negatively impact 32-bit, as it makes the slice size 4GB.
>
> With hugepages this large the mmap address underflows, so I'd keep
> increasing the boundary until we start with one larger than zero:
>
> diff --git a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24.c b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24.c
> index a465aad..9523067 100644
> --- a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24.c
> +++ b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24.c
> @@ -23,7 +23,7 @@
>
>  static int  fd = -1;
>  static unsigned long slice_boundary;
> -static long hpage_size, page_size;
> +static unsigned long hpage_size, page_size;
>
>  static int init_slice_boundary(int fd)
>  {
> @@ -40,6 +40,10 @@ static int init_slice_boundary(int fd)
>         slice_size = 0x10000000;
>  #endif
>
> +       /* avoid underflow on systems with large huge pages */
> +       while (slice_boundary + slice_size < 2 * hpage_size)
> +               slice_boundary += slice_size;
> +
>         /* dummy malloc so we know where is heap */
>         heap = malloc(1);
>         free(heap);
>
>
> Another issue, however, is the use of MAP_FIXED, which can stomp over
> existing mappings:
>

If we make the slice_size larger than 2*hpage_size, this situation
will be avoided, I guess, because the gap between the two neighboring
slices guarantees there is no chance of overlap.
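
For the record, a quick standalone sketch (my own illustration, not
part of either patch) that replays the proposed boundary-advance loop
with aarch64's 512MB hugepage and the test's generic 256MB slice size,
to show where the first usable boundary lands:

#include <stdio.h>

int main(void)
{
	unsigned long hpage_size = 0x20000000UL;     /* 512MB hugepage */
	unsigned long slice_boundary = 0x00000000UL; /* generic default */
	unsigned long slice_size = 0x10000000UL;     /* 256MB slice */

	/*
	 * Same condition as the proposed patch: step the boundary up
	 * one slice at a time until two hugepages fit below
	 * slice_boundary + slice_size, so addresses computed just
	 * below the boundary can no longer underflow past NULL.
	 */
	while (slice_boundary + slice_size < 2 * hpage_size) {
		slice_boundary += slice_size;
		printf("advance: slice_boundary = 0x%lx\n", slice_boundary);
	}

	/*
	 * Prints 0x10000000, 0x20000000, 0x30000000 and stops, since
	 * 0x30000000 + 0x10000000 == 2 * 0x20000000.
	 */
	printf("final: slice_boundary = 0x%lx\n", slice_boundary);
	return 0;
}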

I will look into this tomorrow, but feel free to give that machine
(shared with you) a try.
(good night :)



>
> [pid 48607] 04:50:51 mmap(0x20000000, 2147483648, PROT_READ, MAP_SHARED|MAP_FIXED, 3, 0) = 0x20000000
> [pid 48607] 04:50:51 munmap(0x20000000, 2147483648) = 0
>
> The test may PASS, but at the end you get:
>
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  numa_bitmask_free (bmp=0x39ae09a0) at libnuma.c:228
> 228             free(bmp->maskp);
> (gdb) bt
> #0  numa_bitmask_free (bmp=0x39ae09a0) at libnuma.c:228
> #1  numa_bitmask_free (bmp=0x39ae09a0) at libnuma.c:224
> #2  0x0000ffff89263360 in numa_fini () at libnuma.c:114
> #3  0x0000ffff892d4cac in _dl_fini () at dl-fini.c:141
> #4  0x0000ffff890d899c in __run_exit_handlers (status=status@entry=0, listp=0xffff892105e0 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit.c:108
> #5  0x0000ffff890d8b1c in __GI_exit (status=status@entry=0) at exit.c:139
> #6  0x000000000040ecd8 in testrun () at tst_test.c:1468
> #7  fork_testrun () at tst_test.c:1592
> #8  0x0000000000410728 in tst_run_tcases (argc=<optimized out>, argv=<optimized out>, self=self@entry=0x440650 <test>) at tst_test.c:1686
> #9  0x0000000000403ef8 in main (argc=<optimized out>, argv=<optimized out>) at ../../../../../include/tst_test.h:394
>
> (gdb) info proc map
> Mapped address spaces:
>
>           Start Addr           End Addr       Size     Offset objfile
>             0x400000           0x430000    0x30000        0x0 /root/ltp.upstream/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24
>             0x430000           0x440000    0x10000    0x20000 /root/ltp.upstream/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24
>             0x440000           0x450000    0x10000    0x30000 /root/ltp.upstream/testcases/kernel/mem/hugetlb/hugemmap/hugemmap24
>       0xffff890a0000     0xffff89200000   0x160000        0x0 /usr/lib64/libc-2.28.so
>       0xffff89200000     0xffff89210000    0x10000   0x150000 /usr/lib64/libc-2.28.so
>       0xffff89210000     0xffff89220000    0x10000   0x160000 /usr/lib64/libc-2.28.so
>       0xffff89220000     0xffff89240000    0x20000        0x0 /usr/lib64/libpthread-2.28.so
>       0xffff89240000     0xffff89250000    0x10000    0x10000 /usr/lib64/libpthread-2.28.so
>       0xffff89250000     0xffff89260000    0x10000    0x20000 /usr/lib64/libpthread-2.28.so
>       0xffff89260000     0xffff89270000    0x10000        0x0 /usr/lib64/libnuma.so.1.0.0
>       0xffff89270000     0xffff89280000    0x10000        0x0 /usr/lib64/libnuma.so.1.0.0
>       0xffff89280000     0xffff89290000    0x10000    0x10000 /usr/lib64/libnuma.so.1.0.0
>       0xffff89290000     0xffff892a0000    0x10000        0x0 /dev/shm/ltp_hugemmap24_48644 (deleted)
>       0xffff892d0000     0xffff89300000    0x30000        0x0 /usr/lib64/ld-2.28.so
>       0xffff89300000     0xffff89310000    0x10000    0x20000 /usr/lib64/ld-2.28.so
>       0xffff89310000     0xffff89320000    0x10000    0x30000 /usr/lib64/ld-2.28.so
>
>
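
The strace above shows the hazard: a MAP_FIXED probe silently replaces
anything already mapped in its range, and the backtrace suggests one
of the probes landed on memory libnuma still needed, hence the SIGSEGV
in numa_fini() at exit. As a hedged sketch (my assumption, not what
the test currently does), on Linux 4.17+ a probe could instead use
MAP_FIXED_NOREPLACE, which fails with EEXIST rather than clobbering an
existing mapping:

#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <sys/mman.h>

/*
 * Try to map len bytes of fd read-only at exactly addr without
 * replacing anything already mapped there. Returns NULL when the
 * range is occupied (EEXIST) or on any other failure, so the caller
 * can probe a different slice instead of stomping the mapping.
 */
static void *probe_fixed_noreplace(void *addr, size_t len, int fd)
{
	void *p = mmap(addr, len, PROT_READ,
		       MAP_SHARED | MAP_FIXED_NOREPLACE, fd, 0);

	if (p == MAP_FAILED)
		return NULL;

	return p;
}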

-- 
Regards,
Li Wang

