[LTP] [PATCH 2/2] fill_fs: Ensure written data is not easily compressed

Cyril Hrubis <chrubis@suse.cz>
Tue Dec 6 17:15:46 CET 2022


Hi!
> I suppose that instead of writing random lengths we could just copy
> /dev/urandom to <path> in static chunks of a reasonable size.

Actually it would make sense to do random-length writes as well, at
least for a subset of files. I guess that in a real-life scenario we
would encounter both block-sized writes and randomly sized writes.

I would do something like:

#define BLOCK_SIZE 4096

...
	char buf[2*BLOCK_SIZE];

	/* Fill the buffer from /dev/urandom once, up front. */
	fd = SAFE_OPEN("/dev/urandom", O_RDONLY);
	SAFE_READ(1, fd, buf, sizeof(buf));
	SAFE_CLOSE(fd);

	...

	/* Decide per file between fixed and random-sized writes. */
	random_size = random() % 2;

	/* By now fd is the file being filled; write until it fails. */
	for (;;) {
		if (random_size)
			len = random() % BLOCK_SIZE;
		else
			len = BLOCK_SIZE;

		/* Random offset into the buffer so writes do not repeat. */
		off = random() % BLOCK_SIZE;

		ret = write(fd, buf + off, len);

	...
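
For completeness, a self-contained version of that loop could look like
the following. The fill_file() name and the plain open()/write() error
handling are just for illustration here; the real fill_fs.c would use
the LTP SAFE_*() macros instead:

#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 4096

static char buf[2 * BLOCK_SIZE];

static void fill_file(const char *path)
{
	int fd, random_size;
	size_t off, len, got = 0;
	ssize_t ret;

	/* Seed the buffer with random bytes once; reads from
	 * /dev/urandom may be short, so loop until it is full. */
	fd = open("/dev/urandom", O_RDONLY);
	if (fd < 0)
		exit(1);
	while (got < sizeof(buf)) {
		ret = read(fd, buf + got, sizeof(buf) - got);
		if (ret <= 0)
			exit(1);
		got += ret;
	}
	close(fd);

	/* Decide per file between fixed and random-sized writes. */
	random_size = random() % 2;

	fd = open(path, O_WRONLY | O_CREAT, 0600);
	if (fd < 0)
		exit(1);

	for (;;) {
		if (random_size)
			len = random() % BLOCK_SIZE + 1;
		else
			len = BLOCK_SIZE;

		/* A random offset into the 2*BLOCK_SIZE pool keeps
		 * consecutive writes from repeating the same bytes. */
		off = random() % BLOCK_SIZE;

		ret = write(fd, buf + off, len);
		if (ret < 0) {
			/* Stop once the filesystem is full. */
			if (errno == ENOSPC)
				break;
			exit(1);
		}
	}

	close(fd);
}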


But feel free to implement anything that you find sensible.

-- 
Cyril Hrubis
chrubis@suse.cz

