[LTP] [PATCH v2 2/2] doc: Add ground rules page
Li Wang
liwang@redhat.com
Tue Dec 16 08:07:01 CET 2025
On Mon, Dec 15, 2025 at 11:01 PM Andrea Cervesato <andrea.cervesato@suse.com>
wrote:
> Hi!
>
> On Mon Dec 15, 2025 at 3:30 PM CET, Petr Vorel wrote:
> > > Another *important* rule concerns artificial intelligence. I've noticed
> > > some beginners submitting LTP patches directly generated by AI tools.
> > > This puts a significant burden on patch review, as AI can sometimes
> > > introduce a weird/unreliable perspective into the code.
> >
> > > Be careful when using AI tools
> > +1 I like this title.
> >
> > > ==============================
> > > AI tools can be useful for executing, summarizing, or suggesting
> > > approaches, but they can also be confidently wrong and give an illusion
> > > of correctness. Treat AI output as untrusted: verify claims against the
> > > code, documentation, and actual behavior on a reproducer.
> >
> > > Do not send AI-generated changes as raw patches. AI-generated diffs often
> > > contain irrelevant churn, incorrect assumptions, inconsistent style, or
> > > subtle bugs, which creates additional burden for maintainers to review
> > > and fix.
> >
> > > Best practice is to write your own patches and have them reviewed by AI
> > > before submitting them, which helps add beneficial improvements to your
> > > work.
> >
> > Hopefully the last paragraph will be understood the way it is meant,
> > because we really don't want to encourage people to send something
> > generated by AI that they don't really understand at all. I'd consider
> > not suggesting any AI.
> >
> > I remember briefly reading kernel folks discussing their policy [1]:
> >
>
> There's nothing wrong with AI usage nowadays, since it's proven that these
> tools can shine at certain specific tasks. In general, code generation works
> poorly, especially in kernel development. And in LTP, obviously.
>
> But when it comes to writing correct commit messages, learning what a certain
> piece of code is doing, or understanding compile errors, they can be useful.
>
Exactly!
Using AI wisely can speed up debugging work, but the user's own experience is
ultimately needed to determine whether the output is correct.
> That said, I like Li's approach, because it gives AI its right place without
> expanding its boundaries, which are well defined and well known.
>
Thanks!
--
Regards,
Li Wang