[LTP] [PATCH 2/2] fill_fs: Ensure written data is not easily compressed

Richard Palethorpe rpalethorpe@suse.de
Tue Dec 6 17:30:02 CET 2022


Hello,

Cyril Hrubis <chrubis@suse.cz> writes:

> Hi!
>> I suppose that instead of writing random lengths we could just copy
>> /dev/urandom to <path> in static chunks of a reasonable size.
>
> Actually it would make sense to do random-length writes as well, at
> least for a subset of files. I guess that in a real-life scenario we
> would encounter both block-sized writes and randomly sized writes.
>
> I would do something like:
>
> #define BLOCK_SIZE 4096
>
> ..
> 	char buf[2*BLOCK_SIZE];
>
> 	fd = SAFE_OPEN("/dev/urandom", O_RDONLY);
> 	SAFE_READ(1, fd, buf, sizeof(buf));
> 	SAFE_CLOSE(fd);
>
> 	...
>
> 	random_size = random() % 2;
>
> 	while (len) {
> 		if (random_size)
> 			len = random() % BLOCK_SIZE;
> 		else
> 			len = BLOCK_SIZE;
>
> 		off = random() % BLOCK_SIZE;
>
> 		ret = write(fd, buf + off, len);
>
> 	...
>
>
> But feel free to implement anything that you find sensible.

What are we trying to do though, simply fill the device to test the
ENOSPC condition or some kind of poor man's fuzzing?

-- 
Thank you,
Richard.
