[LTP] [PATCH v2 3/3] Hugetlb: Migrating libhugetlbfs corrupt-by-cow-opt

Richard Palethorpe <rpalethorpe@suse.de>
Thu Oct 27 11:18:21 CEST 2022


Hello,

Tarun Sahu <tsahu@linux.ibm.com> writes:

> On Tue, 2022-10-25 at 12:04 +0100, Richard Palethorpe wrote:
>> Hello,
>> 
>> Tarun Sahu <tsahu@linux.ibm.com> writes:
>> 
>> > Migrating the libhugetlbfs/testcases/corrupt-by-cow-opt.c test
>> > 
>> > Test Description: Test the sanity of the COW optimization on the
>> > page cache. If a page in the page cache has a ref count of only 1,
>> > it is mapped directly for a private mapping and can be overwritten
>> > freely, so the next time we access the page we can see corrupt
>> > data.
>> 
>> Seems like this and 2/3 follow the same pattern. The setups are
>> reasonably similar and could be encapsulated in tst_hugepage.
> Do you mean encapsulating it in a function and calling it from setup?
> Because it will anyway require explicit cleanup.
>
> Or by defining a new field in struct tst_hugepage? If that is the
> case, the setup will automatically be done in do_setup in tst_test.c,
> which means it will require changes in tst_test.c too; a change in
> do_cleanup in tst_test.c will also be required.

Yes, it's a very common pattern, so encapsulating it will probably save
a lot of boilerplate.
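
To sketch what I mean, the mount and reservation could move behind a
flag in struct tst_test, something like the following (the field names
here are only a suggestion of a possible API, not something that exists
in the library today):

	#define MNTPOINT "hugetlbfs/"

	static struct tst_test test = {
		.needs_root = 1,
		/* Hypothetical: with .needs_hugetlbfs set, the library
		 * would mount hugetlbfs on .mntpoint in do_setup() and
		 * unmount it in do_cleanup(), so the test can drop its
		 * own setup and cleanup functions entirely.
		 */
		.mntpoint = MNTPOINT,
		.needs_hugetlbfs = 1,
		.test_all = run_test,
		.hugepages = {2, TST_NEEDS},
	};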

>
>> 
>> > +
>> > +static struct tst_test test = {
>> > +	.needs_root = 1,
>> > +	.needs_tmpdir = 1,
>> > +	.options = (struct tst_option[]) {
>> > +		{"H:", &Hopt,   "Location of hugetlbfs, i.e. -H /var/hugetlbfs"},
>> > +		{"s:", &nr_opt, "Set the number of hugepages to be allocated"},
>> 
>> nr_opt also seems suspicious. The test only ever allocates one page
>> at a time regardless of what this is set to. Changing it will just
>> change how much free memory we check for, unless I am mistaken.
> Yes, I will update it, and I will also check the other test cases to
> see whether it is required there.
>
>> 
>> > +		{}
>> > +	},
>> > +	.setup = setup,
>> > +	.cleanup = cleanup,
>> > +	.test_all = run_test,
>> > +	.hugepages = {2, TST_NEEDS},
>> > +};
>> > -- 
>> > 2.31.1
>> 
>> 
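For reference, the page cache COW scenario the test description above
refers to boils down to something like this (an illustrative sketch
only, not the migrated test itself; error checking is omitted and fd
is assumed to be an open file on hugetlbfs):

	#include <string.h>
	#include <sys/mman.h>

	static int file_corrupted_by_cow(int fd, size_t hpage_size)
	{
		char *p;
		int corrupted;

		/* Populate the page cache through a shared mapping */
		p = mmap(NULL, hpage_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		memset(p, 'a', hpage_size);
		munmap(p, hpage_size);

		/* Write through a private mapping: COW must copy the
		 * page. The optimization under test reuses the page
		 * cache page when its refcount is 1, and if it does so
		 * wrongly the write hits the file data directly.
		 */
		p = mmap(NULL, hpage_size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE, fd, 0);
		memset(p, 'b', hpage_size);
		munmap(p, hpage_size);

		/* The file must still contain 'a' */
		p = mmap(NULL, hpage_size, PROT_READ, MAP_SHARED, fd, 0);
		corrupted = (p[0] != 'a');
		munmap(p, hpage_size);

		return corrupted;
	}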


-- 
Thank you,
Richard.

