[LTP] [PATCH v2 1/2] lib: add support for kinds of hpsize reservation
Li Wang
liwang@redhat.com
Thu Oct 19 10:22:20 CEST 2023
On Wed, Oct 18, 2023 at 6:09 PM Richard Palethorpe <rpalethorpe@suse.de>
wrote:
> Hello,
>
> Li Wang <liwang@redhat.com> writes:
>
> > Hi Cyril,
> >
> > [Please hold off on merging this patch]
> >
> > The part of this approach I'm hesitant about is the new field
> > 'hp->hpsize'. It seems unwise to leave it to users to fill in the
> > gigantic page size manually, as some arches support different
> > huge/gigantic page sizes:
>
> Yes, good idea.
>
> >
> > x86_64 and x86: 2MB and 1GB.
> > PowerPC: ranging from 64KB to 16GB.
> > ARM64: 2MB and 1GB.
> > IA-64 (Itanium): from 4KB to 256MB.
> >
> > we probably need an intelligent way to detect and reserve whatever
> > huge page or gigantic page is needed, all done in the LTP library or
> > setup(). Then people don't need to fill in any size by hand, which
> > avoids typos or other mistakes.
>
> It seems like a special flag is needed in mmap if you want to allocate a
> gigantic page other than 1GB?
>
Right, MAP_HUGE_2MB or MAP_HUGE_1GB is used in conjunction
with MAP_HUGETLB to select an alternative hugetlb page size.
But no matter which one is used, we should reserve enough
pages for it in "HugePages_Free:". That's what ".hugepages"
does in the LTP library.
>
> >
> > The improvement I can think of is to extend the hugepage policy
> > with a "_G" suffix to specify gigantic pages.
> >
> > Does this sound better? What do you think?
> >
> > Something drafted based on my patch v2:
> >
> > --- a/include/tst_hugepage.h
> > +++ b/include/tst_hugepage.h
> > @@ -20,14 +20,15 @@ extern char *nr_opt; /* -s num Set the number of the been allocated hugepages
> > extern char *Hopt;   /* -H /..  Location of hugetlbfs, i.e. -H /var/hugetlbfs */
> >
> > enum tst_hp_policy {
> > -	TST_REQUEST,
> > -	TST_NEEDS,
> > +	TST_REQUEST_H = 0x0,
> > +	TST_REQUEST_G = 0x1,
> > +	TST_NEEDS_H = 0x2,
> > +	TST_NEEDS_G = 0x4,
> > };
> >
> > struct tst_hugepage {
> > 	const unsigned long number;
> > 	enum tst_hp_policy policy;
> > -	const unsigned long hpsize;
> > };
>
> Why not keep hpsize and add enum tst_hp_size { TST_HUGE, TST_GIGANTIC }?
>
Good advice!
I think maybe 'enum tst_hp_type' is better? That name reflects the
type of hugepage we choose for the test.
enum tst_hp_policy {
	TST_REQUEST,
	TST_NEEDS,
};

enum tst_hp_type {
	TST_HUGEPAGE,
	TST_GIGANTIC,
};

struct tst_hugepage {
	const unsigned long number;
	enum tst_hp_policy policy;
	enum tst_hp_type hptype;
};
...
static struct tst_test test = {
	.needs_root = 1,
	...
	.hugepages = {2, TST_NEEDS, TST_GIGANTIC},
};
>
> In theory more sizes can be added.
>
> >
> > /*
> > @@ -35,6 +36,11 @@ struct tst_hugepage {
> > */
> > size_t tst_get_hugepage_size(void);
> >
> > +/*
> > + * Get the gigantic hugepage size. Returns 0 if hugepages are not supported.
> > + */
> > +size_t tst_get_gigantic_size(void);
> > +
> > /*
> > * Try the best to request a specified number of huge pages from system,
> > * it will store the reserved hpage number in tst_hugepages.
> > diff --git a/lib/tst_hugepage.c b/lib/tst_hugepage.c
> > index f4b18bbbf..568884fbb 100644
> > --- a/lib/tst_hugepage.c
> > +++ b/lib/tst_hugepage.c
> > @@ -21,6 +21,30 @@ size_t tst_get_hugepage_size(void)
> > return SAFE_READ_MEMINFO("Hugepagesize:") * 1024;
> > }
> >
> > +/* Check if hugetlb page is gigantic */
> > +static inline int is_hugetlb_gigantic(unsigned long hpage_size)
> > +{
> > +	return (hpage_size / getpagesize()) >> 11;
> > +}
>
> What is 11? If it is the order or shift of hugepages then that is not
> constant (see below).
>
This function is extracted from mem/hugetlb/lib/hugetlb.h.
I think the 11 means the gigantic page size is at least 2048 times
the base page size. But you're right, it can't cover all situations.
>
> > +
> > +size_t tst_get_gigantic_size(void)
> > +{
> > +	DIR *dir;
> > +	struct dirent *ent;
> > +	unsigned long g_hpage_size;
> > +
> > +	dir = SAFE_OPENDIR(PATH_HUGEPAGES);
> > +	while ((ent = SAFE_READDIR(dir))) {
> > +		if ((sscanf(ent->d_name, "hugepages-%lukB", &g_hpage_size) == 1) &&
> > +		    is_hugetlb_gigantic(g_hpage_size * 1024)) {
> > +			break;
> > +		}
> > +	}
>
> I guess in theory more gigantic page sizes could be added. I'm not sure
> what size we should pick, but we don't want it to be random because it
> would make debugging more difficult.
>
> So could we search for the smallest size (hugepagesize) and second
> smallest (smallest gigantic page)?
>
Yes, we could give that a try: intelligently detect and pick the
proper size for the gigantic or huge page, which should work on most
mainstream platforms (and TCONF on unsupported systems).
>
> > +
> > +	SAFE_CLOSEDIR(dir);
> > +	return g_hpage_size * 1024;
> > +}
> > +
> > unsigned long tst_reserve_hugepages(struct tst_hugepage *hp)
> > {
> > 	unsigned long val, max_hpages, hpsize;
> > @@ -43,10 +67,10 @@ unsigned long tst_reserve_hugepages(struct tst_hugepage *hp)
> > 	else
> > 		tst_hugepages = hp->number;
> >
> > -	if (hp->hpsize)
> > -		hpsize = hp->hpsize;
> > +	if (hp->policy & (TST_NEEDS_G | TST_REQUEST_G))
> > +		hpsize = tst_get_gigantic_size() / 1024;
> > 	else
> > -		hpsize = SAFE_READ_MEMINFO(MEMINFO_HPAGE_SIZE);
> > +		hpsize = tst_get_hugepage_size() / 1024;
> >
> > 	sprintf(hugepage_path, PATH_HUGEPAGES"/hugepages-%lukB/nr_hugepages", hpsize);
> > 	if (access(hugepage_path, F_OK)) {
> >
> >
> >
> >
> > --
> > Regards,
> > Li Wang
>
>
> --
> Thank you,
> Richard.
>
>
--
Regards,
Li Wang