[LTP] [PATCH] unshare03: using soft limit of NOFILE

lufei lufei@uniontech.com
Fri Mar 14 05:42:57 CET 2025


I think it is safer to raise NOFILE starting from the soft limit rather
than from the hard limit.

Using the hard limit may lead to a dup2() ENOMEM error, which turns the
result into TBROK on machines with little memory (e.g. with 2GB of
memory in my case, the hard limit in /proc/sys/fs/nr_open comes out to
1073741816).

Signed-off-by: lufei <lufei@uniontech.com>
---
 testcases/kernel/syscalls/unshare/unshare03.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/testcases/kernel/syscalls/unshare/unshare03.c b/testcases/kernel/syscalls/unshare/unshare03.c
index 7c5e71c4e..bb568264c 100644
--- a/testcases/kernel/syscalls/unshare/unshare03.c
+++ b/testcases/kernel/syscalls/unshare/unshare03.c
@@ -24,7 +24,7 @@
 
 static void run(void)
 {
-	int nr_open;
+	int rlim_max;
 	int nr_limit;
 	struct rlimit rlimit;
 	struct tst_clone_args args = {
@@ -32,14 +32,12 @@ static void run(void)
 		.exit_signal = SIGCHLD,
 	};
 
-	SAFE_FILE_SCANF(FS_NR_OPEN, "%d", &nr_open);
-	tst_res(TDEBUG, "Maximum number of file descriptors: %d", nr_open);
+	SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
+	rlim_max = rlimit.rlim_max;
 
-	nr_limit = nr_open + NR_OPEN_LIMIT;
+	nr_limit = rlim_max + NR_OPEN_LIMIT;
 	SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_limit);
 
-	SAFE_GETRLIMIT(RLIMIT_NOFILE, &rlimit);
-
 	rlimit.rlim_cur = nr_limit;
 	rlimit.rlim_max = nr_limit;
 
@@ -47,10 +45,10 @@ static void run(void)
 	tst_res(TDEBUG, "Set new maximum number of file descriptors to : %d",
 		nr_limit);
 
-	SAFE_DUP2(2, nr_open + NR_OPEN_DUP);
+	SAFE_DUP2(2, rlim_max + NR_OPEN_DUP);
 
 	if (!SAFE_CLONE(&args)) {
-		SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", nr_open);
+		SAFE_FILE_PRINTF(FS_NR_OPEN, "%d", rlim_max);
 		TST_EXP_FAIL(unshare(CLONE_FILES), EMFILE);
 		exit(0);
 	}
-- 
2.39.3
