[LTP] [PATCH] Add goals of patch review and tips

Petr Vorel <pvorel@suse.cz>
Tue Mar 14 18:54:38 CET 2023


Hi Richie,

> I see two options for patch review. Either we have a single senior
> maintainer who does most of it, or it is distributed.

> For now I think it needs to be distributed which is beyond the scope
> of this commit.

> In order to distribute it we need new contributors to review each
> others' work at least for the first few revisions.

> I think that anyone can review a patch if they put the work in to test
> it and try to break it. Then understand why it is broken.

> This commit states some ideas about how to do that, plus some tips for
> more advanced patch review.

Very nice improvements, thanks!
I agree with the points Cyril already raised.

Reviewed-by: Petr Vorel <pvorel@suse.cz>

> Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
> Cc: Cyril Hrubis <chrubis@suse.cz>
> Cc: Andrea Cervesato <andrea.cervesato@suse.de>
> Cc: Avinesh Kumar <akumar@suse.de>
> Cc: Wei Gao <wegao@suse.com>
> Cc: Petr Vorel <pvorel@suse.cz>
> ---
>  doc/maintainer-patch-review-checklist.txt | 78 ++++++++++++++++++++++-
>  1 file changed, 77 insertions(+), 1 deletion(-)

> diff --git a/doc/maintainer-patch-review-checklist.txt b/doc/maintainer-patch-review-checklist.txt
> index 706b0a516..be0cd0961 100644
> --- a/doc/maintainer-patch-review-checklist.txt
> +++ b/doc/maintainer-patch-review-checklist.txt
> @@ -1,4 +1,80 @@
> -# Maintainer Patch Review Checklist
> +# Patch Review

I'd rename the page to patch-review.txt (can be done later).

> +
> +Anyone can and should review patches. It's the only way to get good at
> +patch review and for the project to scale.
> +
> +## Goals of patch review
> +
> +1. Prevent false positive test results
> +2. Prevent false negative test results
> +3. Make future changes as easy as possible
> +
> +## How to find clear errors
> +
> +A clear error is one where there is unlikely to be any argument if you
> +provide evidence of it. Evidence means an error trace or a logical proof
> +that the error will occur in a common situation.
> +
> +The following are examples and may not be appropriate for all tests.
> +
> +* Merge the patch. It should apply cleanly to master.
> +* Compile the patch with default and non-default configurations.
Very minor nit: you sometimes put a dot at the end of a list item, sometimes not.

> +  - Use sanitizers e.g. undefined behaviour, address.
> +  - Compile on non-x86
> +  - Compile on x86 with -m32
BTW: I suppose nobody bothers about 32-bit Arm or other archs any more.
It's definitely out of scope at SUSE.
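
That said, -m32 does catch real problems even in tests which are not
arch specific. A trivial made-up example (mine, not from this patch) of
the kind of silent truncation a 32-bit build exposes:

#include "tst_test.h"

static void run(void)
{
	/* Hypothetical bug: a "5GB" length stored in a long. Fine on
	 * x86_64, but with -m32 long is 32 bits wide, gcc warns about
	 * the constant conversion and len silently becomes 1GB. */
	long len = 5LL * 1024 * 1024 * 1024;

	tst_res(TINFO, "using len = %ld", len);
	tst_res(TPASS, "dummy result, just illustrating the truncation");
}

static struct tst_test test = {
	.test_all = run,
};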

> +* Use `make check`
> +* Run affected tests in a VM
> +  - Use single vCPU
> +  - Use many vCPUs and enable NUMA
> +  - Restrict RAM to < 1GB.
> +* Run affected tests on an embedded device
> +* Run affected tests on a non-x86 machine in general
Very nice list, which shows how hard it would be to do proper testing
(it's not done for most patches - problems are found afterwards - but it's
very good that you list it here).

> +* Run reproducers on a kernel where the bug is present
> +* Run tests with "-i0"
Better to write it as `-i0` (backticks give nicer markup).

I'd also mention `-i100` (or even higher, e.g. `-i1100`, to catch errors like
file descriptors getting exhausted due to a missing `SAFE_CLOSE(fd)`).

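To illustrate why the higher counts matter, here is a made-up fragment
(not from this patch) which passes with `-i1` but, with the default
limit of 1024 open files, eventually dies with EMFILE under `-i1100`:

#include <fcntl.h>
#include "tst_test.h"

static void run(void)
{
	int fd = SAFE_OPEN("ltp_file", O_RDWR | O_CREAT, 0600);

	/* ... exercise the syscall under test here ... */
	tst_res(TPASS, "opened fd %d", fd);

	/* BUG: no SAFE_CLOSE(fd), so every -i iteration leaks one
	 * file descriptor and only a high -i count notices it. */
}

static struct tst_test test = {
	.test_all = run,
	.needs_tmpdir = 1,
};
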
Also, both of these are already mentioned in the "New tests" section; I'd
remove them from there (it's enough to mention them just once).

> +* Compare usage of system calls with man page descriptions
> +* Compare usage of system calls with kernel code
> +* Search the LTP library for existing helper functions
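
On the last point, a common case (my example, not part of the patch) is
a new test open-coding error handling the library already provides:

#include <fcntl.h>
#include "tst_test.h"

static void run(void)
{
	int fd;

	/* Open-coded error handling, often seen in new submissions: */
	fd = open("testfile", O_RDWR | O_CREAT, 0600);
	if (fd < 0)
		tst_brk(TBROK | TERRNO, "open() failed");
	SAFE_CLOSE(fd);

	/* The existing helper does the same check in one line: */
	fd = SAFE_OPEN("testfile", O_RDWR | O_CREAT, 0600);
	SAFE_CLOSE(fd);

	tst_res(TPASS, "library helpers keep the test body short");
}

static struct tst_test test = {
	.test_all = run,
	.needs_tmpdir = 1,
};
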
> +
> +## How to find subtle errors
> +
> +A subtle error is one where you can expect some argument because you
> +do not have clear evidence of an error. It is best to state these as
> +questions and not make assertions if possible.
> +
> +However, if it is a matter of style or "taste", then senior maintainers
> +can assert what is correct to avoid bikeshedding.
> +
> +* Ask what happens if there is an error: could it be debugged just
> +  with the test output?
> +* Are we testing undefined behaviour?
> +  - Could future kernel behaviour change without "breaking userland"?
> +  - Does the kernel behave differently depending on hardware?
> +  - Does it behave differently depending on kernel configuration?
> +  - Does it behave differently depending on the compiler?
> +* Will it scale to tiny and huge systems?
> +  - What happens if there are 100+ CPUs?
> +  - What happens if each CPU core is very slow?
> +  - What happens if there is 2TB of RAM?

Again, very good points, even though it's hard to test all of these beforehand.
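
For the CPU questions one concrete thing to look for (my own sketch,
not from the patch, assuming tst_ncpus_available() from tst_cpu.h) is
hard-coded worker counts; deriving them from the machine makes the
behaviour predictable on both a 2-vCPU guest and a 100+ CPU server:

#include "tst_test.h"
#include "tst_cpu.h"

static void run(void)
{
	/* A hard-coded "64 workers" may overload a tiny VM and
	 * under-use a big box; derive the count from the system. */
	long nproc = tst_ncpus_available();

	tst_res(TINFO, "running with %ld workers", nproc);
	/* ... fork nproc children doing the actual work here ... */
	tst_res(TPASS, "scaled to the available CPUs");
}

static struct tst_test test = {
	.test_all = run,
};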

> +* Are we repeating a pattern that can be turned into a library function?
> +* Is a single test trying to do too much?
> +* Could multiple similar tests be merged?
> +* Race conditions
> +  - What happens if a process gets preempted?
> +  - Could checkpoints or fuzzsync be used instead?
> +  - Note: usually you can insert a sleep to prove a race condition
> +    exists, however finding them is hard.
> +* Is there a simpler way to achieve the same kernel coverage?
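
On the checkpoint point: when the race is just parent/child ordering,
the checkpoint API gives a deterministic handshake instead of a sleep.
A rough sketch of the pattern (mine, not from the patch):

#include <stdlib.h>
#include "tst_test.h"

static void run(void)
{
	pid_t pid = SAFE_FORK();

	if (!pid) {
		/* child: set up the state being raced on first ... */
		TST_CHECKPOINT_WAKE(0);
		/* ... then block until the parent has inspected it */
		TST_CHECKPOINT_WAIT(0);
		exit(0);
	}

	/* parent: woken only after the child's setup is done */
	TST_CHECKPOINT_WAIT(0);
	tst_res(TPASS, "state checked at a deterministic point");
	TST_CHECKPOINT_WAKE(0);

	tst_reap_children();
}

static struct tst_test test = {
	.test_all = run,
	.forks_child = 1,
	.needs_checkpoints = 1,
};
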
> +
> +## How to get patches merged
> +
> +Once you think a patch is good enough, you should add your Reviewed-by
> +tag. This means you will get some credit for getting the patch
> +merged, and also some of the blame if there are problems.
> +
> +In addition you can expect others to review your patches and add their
> +tags. This will speed up the process of getting your patches merged.
> +
> +## Maintainers Checklist

>  Patchset should be tested locally and ideally also in maintainer's fork in
>  GitHub Actions on GitHub.
I'd encourage people to enable GitHub Actions in their forks (I'm not sure how
many maintainers do this; automation [1] [2] would be best, but nobody bothers
about CI and I'm sort of burnt out driving it myself).

Kind regards,
Petr

[1] https://github.com/linux-test-project/ltp/issues/599
[2] https://github.com/linux-test-project/ltp/issues/600

