[LTP] [PATCH] Add goals of patch review and tips

Richard Palethorpe rpalethorpe@suse.de
Thu Mar 16 11:18:20 CET 2023


Hello,

Cyril Hrubis <chrubis@suse.cz> writes:

> Hi!
>> I see two options for patch review. Either we have a single senior
>> maintainer who does most of it, or it is distributed.
>> 
>> For now I think it needs to be distributed which is beyond the scope
>> of this commit.
>> 
>> In order to distribute it we need new contributors to review each
>> others' work at least for the first few revisions.
>> 
>> I think that anyone can review a patch if they put the work in to
>> test it, try to break it and then understand why it is broken.
>> 
>> This commit states some ideas about how to do that, plus some tips for
>> more advanced patch review.
>> 
>> Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
>> Cc: Cyril Hrubis <chrubis@suse.cz>
>> Cc: Andrea Cervesato <andrea.cervesato@suse.de>
>> Cc: Avinesh Kumar <akumar@suse.de>
>> Cc: Wei Gao <wegao@suse.com>
>> Cc: Petr Vorel <pvorel@suse.cz>
>> ---
>>  doc/maintainer-patch-review-checklist.txt | 78 ++++++++++++++++++++++-
>>  1 file changed, 77 insertions(+), 1 deletion(-)
>> 
>> diff --git a/doc/maintainer-patch-review-checklist.txt b/doc/maintainer-patch-review-checklist.txt
>> index 706b0a516..be0cd0961 100644
>> --- a/doc/maintainer-patch-review-checklist.txt
>> +++ b/doc/maintainer-patch-review-checklist.txt
>> @@ -1,4 +1,80 @@
>> -# Maintainer Patch Review Checklist
>> +# Patch Review
>> +
>> +Anyone can and should review patches. It's the only way to get good at
>> +patch review and for the project to scale.
>> +
>> +## Goals of patch review
>> +
>
> Maybe start with:
>
> 1. Catch typos and obvious mistakes
>
> Everyone does these and usually they are easy to spot for anyone but the
> author.
>
>> +1. Prevent false positive test results
>> +2. Prevent false negative test results
>> +3. Make future changes as easy as possible
>
> I would say that number 3 may be a bit controversial, I've seen cases
> where attempts to futureproof the code caused a steep increase in
> test complexity. So maybe:
>
> 3. Keep the code as simple as possible as well as futureproof

Perhaps just

3. Keep the code as simple as possible, but no simpler

This is possibly paraphrasing Einstein:
https://quoteinvestigator.com/2011/05/13/einstein-simple/


NOTE: I think future proofing is actually very dangerous. What I
      probably meant was

      3. Keep the code as simple as possible, while maintaining optionality,
         but if there appears to be a disproportionate increase in complexity
         for an increase in optionality then simplicity takes priority because
         identifying relevant optionality is hard.

      but "optionality" does not have a nice dictionary definition. I guess
      you could substitute it with "freedom". In any case it's not something I
      would want to write in documentation. There is no easy way to
      express it.

>
> Or something along the lines.

>
>> +## How to find clear errors
>> +
>> +A clear error is one where there is unlikely to be any argument if you
>> +provide evidence of it. Evidence means an error trace or a logical
>> +proof that an error will occur in a common situation.
>> +
>> +The following are examples and may not be appropriate for all tests.
>> +
>> +* Merge the patch. It should apply cleanly to master.
>> +* Compile the patch with default and non-default configurations.
>> +  - Use sanitizers, e.g. undefined behaviour and address.
>> +  - Compile on non-x86
>> +  - Compile on x86 with -m32
>
> Maybe note here that some tests trigger undefined behavior
> intentionally, we do have a few tests that dereference NULL to trigger
> crash, etc.

Indeed the address sanitizer completely breaks a lot of tests and
produces a lot of noise.
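For example, a hypothetical sketch of the pattern (not taken from any
specific LTP test), which intentionally dereferences NULL and expects
the child to die with SIGSEGV:

    #include <signal.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (!pid) {
            volatile int *p = NULL;
            *p = 1; /* crash on purpose */
            exit(0);
        }

        int status;
        waitpid(pid, &status, 0);
        /* Pass only if the child died with the expected signal */
        return !(WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV);
    }

Built with -fsanitize=address the sanitizer intercepts the fault,
prints its own report and exits with its own error code, so the check
above fails even though the kernel behaved correctly.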

>
>> +* Use `make check`
>> +* Run affected tests in a VM
>> +  - Use single vCPU
>> +  - Use many vCPUs and enable NUMA
>> +  - Restrict RAM to < 1GB.
>> +* Run affected tests on an embedded device
>> +* Run affected tests on a non-x86 machine in general
>> +* Run reproducers on a kernel where the bug is present
>> +* Run tests with "-i0"
>> +* Compare usage of system calls with man page descriptions
>> +* Compare usage of system calls with kernel code
>> +* Search the LTP library for existing helper functions
>> +
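To make the checklist above concrete, a typical local run might look
something like this (the path and test name are hypothetical):

    cd testcases/kernel/syscalls/foo
    make
    ./foo01 -i0     # setup and cleanup only, zero test iterations
    ./foo01 -i 100  # repeat the test to shake out flakiness
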
>> +## How to find subtle errors
>> +
>> +A subtle error is one where you can expect some argument because you
>> +do not have clear evidence of an error. It is best to state these as
>> +questions rather than assertions where possible.
>> +
>> +However, if it is a matter of style or "taste", then senior
>> +maintainers can assert what is correct to avoid bike-shedding.
>> +
>> +* Ask what happens if there is an error: could it be debugged just
>> +  with the test output?
>> +* Are we testing undefined behaviour?
>> +  - Could future kernel behaviour change without "breaking userland"?
>> +  - Does the kernel behave differently depending on hardware?
>> +  - Does it behave differently depending on kernel configuration?
>> +  - Does it behave differently depending on the compiler?
>> +* Will it scale to tiny and huge systems?
>> +  - What happens if there are 100+ CPUs?
>> +  - What happens if each CPU core is very slow?
>> +  - What happens if there is 2TB of RAM?
>> +* Are we repeating a pattern that can be turned into a library function?
>> +* Is a single test trying to do too much?
>> +* Could multiple similar tests be merged?
>> +* Race conditions
>> +  - What happens if a process gets preempted?
>> +  - Could checkpoints or fuzzsync be used instead?
>> +  - Note: usually you can insert a sleep to prove a race condition
>> +    exists, however finding one is hard
>> +* Is there a simpler way to achieve the same kernel coverage?
>> +
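On checkpoints: for anyone who has not used them, the pattern looks
roughly like this (an untested sketch against the new library API):

    #include <stdlib.h>
    #include "tst_test.h"

    static void run(void)
    {
        pid_t pid = SAFE_FORK();

        if (!pid) {
            /* Child: block until the parent is ready */
            TST_CHECKPOINT_WAIT(0);
            /* ...child's half of the race... */
            exit(0);
        }

        /* ...parent arranges the racy state... */
        TST_CHECKPOINT_WAKE(0);
        tst_reap_children();
        tst_res(TPASS, "no race observed");
    }

    static struct tst_test test = {
        .test_all = run,
        .forks_child = 1,
        .needs_checkpoints = 1,
    };

Even if a sleep was used to prove the race exists, the merged test
should synchronise like this rather than sleep.
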
>> +## How to get patches merged
>> +
>> +Once you think a patch is good enough, you should add your Reviewed-by
>> +tag. This means you will get some credit for getting the patch
>> +merged, and also some blame if there are problems.
>
> Maybe we should mention the Tested-by: tag explicitly here as well.

I'm not sure how we interpret Tested-by when deciding to merge; does it
mean someone is happy for the test to be merged or not?

Should someone add both tags if they have reviewed and tested it?
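
For what it's worth, the mechanics are simple either way: the trailers
are just lines added in a reply, e.g. (hypothetical reviewer):

    Reviewed-by: Jane Doe <jane@example.com>
    Tested-by: Jane Doe <jane@example.com>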

>
>> +In addition you can expect others to review your patches and add their
>> +tags. This will speed up the process of getting your patches merged.
>> +
>> +## Maintainers Checklist
>>  
>>  Patchset should be tested locally and ideally also in maintainer's fork in
>>  GitHub Actions on GitHub.
>> -- 
>> 2.39.2
>> 


-- 
Thank you,
Richard.

