[LTP] [PATCH v2] kill01: New case cgroup kill

Wei Gao wegao@suse.com
Mon Mar 6 15:54:58 CET 2023


On Mon, Mar 06, 2023 at 06:16:26PM +0800, Li Wang wrote:
> On Sun, Mar 5, 2023 at 5:14 PM Wei Gao via ltp <ltp@lists.linux.it> wrote:
> 
> > Signed-off-by: Wei Gao <wegao@suse.com>
> > ---
> > +#define pid_num 100
> >
> 
> My concern about defining pid_num as a fixed variable is that
> the test may spend a long time on a single-CPU or slower system.
> A saner way is probably to use a dynamic number based on the
> CPUs available on the test bed (e.g. tst_ncpus_available() + 1).
good idea!
> 
> 
> 
> > +static struct tst_cg_group *cg_child_test_simple;
> > +
> > +
> > +static int wait_for_pid(pid_t pid)
> > +{
> > +       int status, ret;
> > +
> > +again:
> > +       ret = waitpid(pid, &status, 0);
> > +       if (ret == -1) {
> > +               if (errno == EINTR)
> > +                       goto again;
> > +
> > +               return -1;
> > +       }
> > +
> > +       if (WIFSIGNALED(status))
> > +               return WTERMSIG(status);
> > +
> > +       if (WIFEXITED(status))
> > +               return WEXITSTATUS(status);
> > +
> > +       return -1;
> > +}
> > +
> > +/*
> > + * A simple process running in a sleep loop until it is
> > + * re-parented.
> > + */
> > +static int child_fn(void)
> > +{
> > +       int ppid = getppid();
> > +
> > +       while (getppid() == ppid)
> > +               usleep(1000);
> > +
> > +       return getppid() == ppid;
> >
> 
> why do we need to return the value of this comparison?
> I suppose most time the child does _not_ have a chance
> to get here.
Yes, the chance of reaching this point is small in our scenario; I will
remove the logic here.
> 
> 
> 
> > +}
> > +
> > +static int cg_run_nowait(const struct tst_cg_group *const cg,
> > +                 int (*fn)(void))
> > +{
> > +       int pid;
> > +
> > +       pid = fork();
> >
> 
> use SAFE_FORK() maybe better.
good catch!
> 
> 
> 
> > +       if (pid == 0) {
> > +               SAFE_CG_PRINTF(cg, "cgroup.procs", "%d", getpid());
> > +               exit(fn());
> > +       }
> > +
> > +       return pid;
> > +}
> > +
> > +static int cg_wait_for_proc_count(const struct tst_cg_group *cg, int count)
> > +{
> > +       char buf[20 * pid_num] = {0};
> > +       int attempts;
> > +       char *ptr;
> > +
> > +       for (attempts = 10; attempts >= 0; attempts--) {
> > +               int nr = 0;
> > +
> > +               SAFE_CG_READ(cg, "cgroup.procs", buf, sizeof(buf));
> > +
> > +               for (ptr = buf; *ptr; ptr++)
> > +                       if (*ptr == '\n')
> > +                               nr++;
> > +
> > +               if (nr >= count)
> > +                       return 0;
> > +
> > +               usleep(100000);
> >
> 
> In this loop, there is only 1 second to wait for the children to be ready.
> So, what happens if the test runs on a slower/overloaded machine that
> needs a bit longer than that? Shouldn't we handle this as a corner-case
> failure?
I will increase it to 10 seconds; if the child processes still cannot become
ready in the correct cgroup by then, we will treat that as a failure.
> 
> 
> 
> > +       }
> > +
> > +       return -1;
> > +}
> > +
> > +static void run(void)
> > +{
> > +       pid_t pids[pid_num];
> > +       int i;
> > +
> > +       cg_child_test_simple = tst_cg_group_mk(tst_cg, "cg_test_simple");
> > +
> > +       for (i = 0; i < pid_num; i++)
> > +               pids[i] = cg_run_nowait(cg_child_test_simple, child_fn);
> > +
> > +       TST_EXP_PASS(cg_wait_for_proc_count(cg_child_test_simple, pid_num));
> > +       SAFE_CG_PRINTF(cg_child_test_simple, "cgroup.kill", "%d", 1);
> > +
> > +       for (i = 0; i < pid_num; i++)
> > +               TST_EXP_EQ_LI(wait_for_pid(pids[i]), SIGKILL);
> > +
> > +       cg_child_test_simple = tst_cg_group_rm(cg_child_test_simple);
> > +}
> > +
> > +static struct tst_test test = {
> > +       .test_all = run,
> > +       .forks_child = 1,
> > +       .max_runtime = 5,
> > +       .needs_cgroup_ctrls = (const char *const []){ "memory", NULL },
> > +       .needs_cgroup_ver = TST_CG_V2,
> > +};
> > --
> > 2.35.3
> >
> >
> > --
> > Mailing list info: https://lists.linux.it/listinfo/ltp
> >
> >
> 
> -- 
> Regards,
> Li Wang

