[LTP] [PATCH v2] unshare03: set nr_open with sizeof(long)*8
Lu Fei
lufei@uniontech.com
Fri Apr 11 08:01:56 CEST 2025
Hi Li, Jan,
Sorry for providing so few details about the patch.
The original code raised the fd limit to nr_open + 1024, dup'ed an fd to
nr_open + 64, and then restored nr_open to its original value in the
child process, so that unshare(CLONE_FILES) fails with EMFILE. When I
tested this on my VM (1 CPU, 2 GB RAM), dup2() failed with ENOMEM,
which made the test result TBROK. I was trying to fix that.
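For context, here is a rough standalone sketch of the pre-patch
sequence (not the LTP code itself; assumes root, error handling
trimmed). The dup2() call is the step that failed on my VM:

#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
	int nr_open, nr_limit;
	struct rlimit rl;
	FILE *f;

	f = fopen("/proc/sys/fs/nr_open", "r");
	fscanf(f, "%d", &nr_open);	/* typically 1024*1024 */
	fclose(f);

	nr_limit = nr_open + 1024;	/* the old NR_OPEN_LIMIT */
	f = fopen("/proc/sys/fs/nr_open", "w");
	fprintf(f, "%d", nr_limit);
	fclose(f);

	rl.rlim_cur = rl.rlim_max = nr_limit;
	setrlimit(RLIMIT_NOFILE, &rl);

	/*
	 * The fd table must grow to cover an fd around one million
	 * before unshare() is even reached; on a 2 GB VM this
	 * allocation failed with ENOMEM.
	 */
	if (dup2(2, nr_open + 64) < 0)	/* the old NR_OPEN_DUP */
		perror("dup2");

	return 0;
}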
In patch v2 I used rlimit.rlim_max instead of nr_open (read from
/proc/sys/fs/nr_open), which worked well. But Cyril suggested a better
approach, so I submitted patch v3.
Quoted from Cyril's comment:
>
>Ah, we cannot set nr_open to anything that is smaller than BITS_PER_LONG:
>...
>unsigned int sysctl_nr_open __read_mostly = 1024*1024;
>unsigned int sysctl_nr_open_min = BITS_PER_LONG;
>/* our min() is unusable in constant expressions ;-/ */
>#define __const_min(x, y) ((x) < (y) ? (x) : (y))
>unsigned int sysctl_nr_open_max =
> __const_min(INT_MAX, ~(size_t)0/sizeof(void *)) & -BITS_PER_LONG;
>...
>Then we need a filedescriptor that is >= 64 and set the nr_open to 64.
I'm using sizeof(long) * 8 because I believe EMFILE can only be
triggered by opening a file descriptor >= BITS_PER_LONG and then
setting nr_open to BITS_PER_LONG: the kernel refuses any nr_open
smaller than BITS_PER_LONG, so a lower value would produce an error
other than EMFILE.
Did I misunderstand Cyril?
Thanks for the reply.
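For reference, a minimal standalone sketch of the mechanism as I
understand it (not the LTP code; assumes root, error handling
trimmed). The child shares the fd table with the parent, so its
unshare(CLONE_FILES) has to copy the table, and the copy cannot hold
an fd >= nr_open:

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>
#include <stdio.h>
#include <errno.h>
#include <sys/wait.h>

static int child(void *arg)
{
	FILE *f = fopen("/proc/sys/fs/nr_open", "w");

	/* Lower nr_open to the kernel minimum, BITS_PER_LONG. */
	fprintf(f, "%zu", sizeof(long) * 8);
	fclose(f);

	/*
	 * The fd table is shared with the parent, so unshare(CLONE_FILES)
	 * must copy it, and the copy cannot hold fd 65 with nr_open == 64.
	 */
	if (unshare(CLONE_FILES) == -1 && errno == EMFILE)
		printf("unshare: EMFILE, as expected\n");

	return 0;
}

int main(void)
{
	static char stack[64 * 1024];
	int nr_open = sizeof(long) * 8;

	/* Occupy an fd just above the limit the child will set. */
	dup2(2, nr_open + 1);

	/* CLONE_FILES: child and parent share one fd table. */
	clone(child, stack + sizeof(stack), CLONE_FILES | SIGCHLD, NULL);
	wait(NULL);

	/* A real test must restore the original nr_open afterwards. */
	return 0;
}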
On Fri, Apr 11, 2025 at 11:21:23AM +0800, Li Wang wrote:
> On Fri, Apr 11, 2025 at 11:09 AM Li Wang <liwang@redhat.com> wrote:
>
> >
> >
> > On Wed, Apr 9, 2025 at 3:50 PM lufei <lufei@uniontech.com> wrote:
> >
> >> Set nr_open with sizeof(long)*8 to trigger EMFILE, instead of reading
> >> number of filedescriptors limit.
> >>
> >
> > Any new changes in Linux that have made the previous way not work now?
> >
>
> Ah, I see. As you pointed out in v1, that hard limit may lead to a dup2
> ENOMEM error, which turns the result into TBROK on a small-RAM system.
>
> So, I agree with Jan: we'd better add more description to the patch.
>
> Reviewed-by: Li Wang <liwang@redhat.com>
>
>
>
> >
> >
> >
> >>
> >> Signed-off-by: lufei <lufei@uniontech.com>
> >> ---
> >> testcases/kernel/syscalls/unshare/unshare03.c | 23 ++-----------------
> >> 1 file changed, 2 insertions(+), 21 deletions(-)
> >>
> >> diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
> >> index 7c5e71c4e..c3b98930d 100644
> >> --- a/testcases/kernel/syscalls/unshare/unshare03.c
> >> +++ b/testcases/kernel/syscalls/unshare/unshare03.c
> >> @@ -17,44 +17,25 @@
> >> #include "lapi/sched.h"
> >>
> >> #define FS_NR_OPEN "/proc/sys/fs/nr_open"
> >> -#define NR_OPEN_LIMIT 1024
> >> -#define NR_OPEN_DUP 64
> >>
> >> #ifdef HAVE_UNSHARE
> >>
> >> static void run(void)
> >> {
> >> - int nr_open;
> >> - int nr_limit;
> >> - struct rlimit rlimit;
> >> struct tst_clone_args args = {
> >> .flags = CLONE_FILES,
> >> .exit_signal = SIGCHLD,
> >> };
> >>
> >> - SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
> >> - tst_res(TDEBUG, "Maximum number of file descriptors: %d",
> >> nr_open);
> >> + int nr_open = sizeof(long) * 8;
> >>
> >> - nr_limit = nr_open + NR_OPEN_LIMIT;
> >> - SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
> >> -
> >> - SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
> >> -
> >> - rlimit.rlim_cur = nr_limit;
> >> - rlimit.rlim_max = nr_limit;
> >> -
> >> - SAFE_SETRLIMIT(RLIMIT_NOFILE, &rlimit);
> >> - tst_res(TDEBUG, "Set new maximum number of file descriptors to :
> >> %d",
> >> - nr_limit);
> >> -
> >> - SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
> >> + SAFE_DUP2(2, nr_open + 1);
> >>
> >> if (!SAFE_CLONE(&args)) {
> >> SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
> >> TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
> >> exit(0);
> >> }
> >> -
> >> }
> >>
> >> static void setup(void)
> >> --
> >> 2.39.3
> >>
> >>
> >> --
> >> Mailing list info: https://lists.linux.it/listinfo/ltp
> >>
> >>
> >
> > --
> > Regards,
> > Li Wang
> >
>
>
> --
> Regards,
> Li Wang