[LTP] Identify current test coverage and clarify contribution opportunities
Cyril Hrubis
chrubis@suse.cz
Wed Apr 17 10:13:27 CEST 2024
Hi!
> I'm Luigi Pellecchia, a Principal SW Quality Engineer at Red Hat.
> I developed an Open Source Software Quality Management Tool, named "BASIL
> The FuSa Spice" that can help the LTP keep track of the test case coverage
> against man pages and to clarify contribution opportunities to new members.
> I prepared an initial demo I shared on LinkedIn at
> https://www.linkedin.com/posts/luigi-pellecchia_how-basil-can-help-linux-test-project-to-activity-7186248090129956864-d-vC?utm_source=share&utm_medium=member_desktop
> This tool is under the hood of ELISA (Linux Foundation) github at
> https://github.com/elisa-tech/BASIL
>
> Any feedback will be greatly appreciated
Sorry to break it to you, but this is not going to work at all, for a
couple of reasons.
Firstly, man pages are not complete enough: the majority of the kernel
interfaces are completely undocumented, and this is not going to get
fixed anytime soon. So any metric based on man pages is doomed to fail.
Secondly, from the demo it looks like a major manual effort is required
to pair man page snippets with testcases, and this work needs to be
redone each time either of them changes. There are thousands of tests in
LTP; going over all of them would take years of manpower that is better
spent elsewhere. We have very obvious gaps in coverage, so writing new
tests for subsystems that are sparsely covered is way better than trying
to identify minor coverage gaps in existing tests.
Thirdly, writing tests to cover an API specification is not exactly the
best strategy; it has been tried before and it didn't produce reasonable
results. That may work for very simple libraries, but for anything more
complex the reality is trickier and useful tests often require clever
thinking. The prime example of this is the Open POSIX testsuite inside
LTP, where they tried to write a test for each assertion from POSIX. That
often led to nonsensical tests, and we are still trying to clean up the
fallout from that. Also, if you look at the kernel regression tests,
which are the most useful ones, the code does not follow any assertions
from man pages; it usually does wild stuff that is not documented
anywhere. The most useful tests we have were written by thinking
outside of the box, which is not something you can achieve when trying
to adhere pedantically to a specification.
Also, looking at the example from your presentation, you pointed out
that nanosleep() is not tested against EFAULT, which is not really
useful to be honest. Sure, we should add that testcase, but in 99% of
the cases the userspace buffers are copied to the kernel by a common
function. That means it's very unlikely that we wouldn't catch a problem
in that function, since we have thousands of tests that actually check
for EFAULT handling in syscalls. Do you see how pedantic comparison of
manual pages against tests can easily lead you to something that is
not that useful?
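
For the record, such a testcase is only a handful of lines in the new
LTP test API. A rough sketch follows (untested; the TST_EXP_FAIL() macro
and the tst_get_bad_addr() helper are written from memory, so double
check them against the current tree before reusing this):

#include <time.h>
#include "tst_test.h"

static void run(void)
{
        /* Pass a pointer to an inaccessible page; the common
         * copy-from-userspace path in the kernel is expected to fail,
         * so nanosleep() should return -1 with errno set to EFAULT. */
        TST_EXP_FAIL(nanosleep(tst_get_bad_addr(NULL), NULL), EFAULT);
}

static struct tst_test test = {
        .test_all = run,
};

Which is exactly the point: the test is trivial to write, and the code
path it exercises is already hammered by the existing EFAULT checks.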
To sum it up, this does not look very useful and has the potential to
divert manpower from where it's needed most, i.e. actual test writing.
--
Cyril Hrubis
chrubis@suse.cz