[LTP] [linux-next:master] [block/bdev] 3c20917120: BUG:sleeping_function_called_from_invalid_context_at_mm/util.c
Luis Chamberlain
mcgrof@kernel.org
Sat Mar 29 01:08:38 CET 2025
On Fri, Mar 28, 2025 at 12:09:06PM -0700, Luis Chamberlain wrote:
> On Fri, Mar 28, 2025 at 02:48:00AM -0700, Luis Chamberlain wrote:
> > On Thu, Mar 27, 2025 at 09:21:30PM -0700, Luis Chamberlain wrote:
> > > Would the extra ref check added via commit 060913999d7a9e50 ("mm:
> > > migrate: support poisoned recover from migrate folio") make the removal
> > > of the spin lock safe now given all the buffers are locked from the
> > > folio? This survives some basic sanity checks on my end with
> > > generic/750 against ext4 and also filling a drive at the same time with
> > > fio. I have a feeling we are not sure: do we have a reproducer for
> > > the issue reported through ebdf4de5642fb6 ("mm: migrate: fix reference
> > > check race between __find_get_block() and migration")? I suspect the
> > > answer is no.
>
> Sebastian, David, is there a reason CONFIG_DEBUG_ATOMIC_SLEEP=y won't
> trigger an atomic sleeping context warning when cond_resched() is used?
> Syzbot and 0-day had ways to reproduce a kernel warning under these
> conditions, but this config didn't, and required an explicit might_sleep().
>
> CONFIG_PREEMPT_BUILD=y
> CONFIG_ARCH_HAS_PREEMPT_LAZY=y
> # CONFIG_PREEMPT_NONE is not set
> # CONFIG_PREEMPT_VOLUNTARY is not set
> CONFIG_PREEMPT=y
> # CONFIG_PREEMPT_LAZY is not set
> # CONFIG_PREEMPT_RT is not set
> CONFIG_PREEMPT_COUNT=y
> CONFIG_PREEMPTION=y
> CONFIG_PREEMPT_DYNAMIC=y
> CONFIG_PREEMPT_RCU=y
> CONFIG_HAVE_PREEMPT_DYNAMIC=y
> CONFIG_HAVE_PREEMPT_DYNAMIC_CALL=y
> CONFIG_PREEMPT_NOTIFIERS=y
> CONFIG_DEBUG_PREEMPT=y
> CONFIG_PREEMPTIRQ_TRACEPOINTS=y
> # CONFIG_PREEMPT_TRACER is not set
> # CONFIG_PREEMPTIRQ_DELAY_TEST is not set
>
> Are there some preemption configs under which cond_resched() won't
> trigger a kernel splat where expected? The only thing I can think of
> is that perhaps some preempt configs don't imply a sleep. If true,
> instead of adding might_sleep() to one piece of code (in this case
> folio_mc_copy()) I wonder if just adding it to cond_resched() may
> be useful.
I think the answer to the above is "no".
It took me quite a bit more testing with the patch below to convince myself
of that. Essentially, to trigger the cond_resched() atomic-context kernel
warning we'd need to actually be in atomic context, and today we can get
there through folio_mc_copy() with large folios.
Today the only atomic context we know of which can end up in page
migration and folio_mc_copy() is with buffer-head filesystems
which support large folios and which use buffer_migrate_folio_norefs() for
their migrate_folio() callback. The patch which enabled the
block layer to support large folios did so only for cases where the
block size of the backing device is > PAGE_SIZE. So for instance your
qemu guest would need a logical block size larger than 4096 on
x86_64. To be clear, ext4 cannot possibly trigger this. No filesystem
can trigger this *case* other than the block device cache, and that
is only possible if block devices have larger block sizes.
The whole puzzle above about cond_resched() not triggering the atomic
warning comes down to this: although buffer_migrate_folio_norefs() *does*
always call filemap_migrate_folio() in atomic context, in practice I'm not
seeing it, that is, we likely bail before we even call folio_mc_copy().
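To spell out the path in question (roughly, as I read the code, and only
for the norefs / check_refs case):

  buffer_migrate_folio_norefs()
    __buffer_migrate_folio(check_refs=true)
      spin_lock(&mapping->i_private_lock)    <-- atomic from here on
      filemap_migrate_folio()
        __migrate_folio()
          folio_mc_copy()
            cond_resched()                   <-- would sleep while atomic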
So for instance we can see:
Mar 28 23:22:04 extra-ext4-4k kernel: __buffer_migrate_folio() in_atomic: 1
Mar 28 23:22:04 extra-ext4-4k kernel: __buffer_migrate_folio() in_atomic: 1
Mar 28 23:23:11 extra-ext4-4k kernel: large folios on folio_mc_copy(): 512 in_atomic(): 0
Mar 28 23:23:11 extra-ext4-4k kernel: large folios on folio_mc_copy(): in_atomic(): 0 calling cond_resched()
Mar 28 23:23:11 extra-ext4-4k kernel: large folios on folio_mc_copy(): in_atomic(): 0 calling cond_resched()
diff --git a/block/bdev.c b/block/bdev.c
index 4844d1e27b6f..1db9edfc4bc1 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -147,6 +147,11 @@ static void set_init_blocksize(struct block_device *bdev)
 			break;
 		bsize <<= 1;
 	}
+
+	if (bsize > PAGE_SIZE)
+		printk("%s: LBS device: mapping_set_folio_min_order(%u): %u\n",
+		       bdev->bd_disk->disk_name, get_order(bsize), bsize);
+
 	BD_INODE(bdev)->i_blkbits = blksize_bits(bsize);
 	mapping_set_folio_min_order(BD_INODE(bdev)->i_mapping,
 			get_order(bsize));
diff --git a/mm/migrate.c b/mm/migrate.c
index f3ee6d8d5e2e..210df4970573 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -851,6 +851,7 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 recheck_buffers:
 		busy = false;
 		spin_lock(&mapping->i_private_lock);
+		printk("__buffer_migrate_folio() in_atomic: %d\n", in_atomic());
 		bh = head;
 		do {
 			if (atomic_read(&bh->b_count)) {
@@ -871,6 +872,8 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 		}
 	}
 
+	if (check_refs)
+		printk("__buffer_migrate_folio() calling filemap_migrate_folio() in_atomic: %d\n", in_atomic());
 	rc = filemap_migrate_folio(mapping, dst, src, mode);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		goto unlock_buffers;
diff --git a/mm/util.c b/mm/util.c
index 448117da071f..61c76712d4bb 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -735,11 +735,15 @@ int folio_mc_copy(struct folio *dst, struct folio *src)
 	long nr = folio_nr_pages(src);
 	long i = 0;
 
+	if (nr > 1)
+		printk("large folios on folio_mc_copy(): %lu in_atomic(): %d\n", nr, in_atomic());
+
 	for (;;) {
 		if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i)))
 			return -EHWPOISON;
 		if (++i == nr)
 			break;
+		printk("large folios on folio_mc_copy(): in_atomic(): %d calling cond_resched()\n", in_atomic());
 		cond_resched();
 	}
And so effectively it is true, cond_resched() is not in atomic context
above, even though filemap_migrate_folio() is certainly being called
in atomic context. What happens in between is that the folios likely won't
migrate due to later checks in filemap_migrate_folio(), like the new
ref check, and instead we end up with page migration of a huge page
later, and *that* is not in atomic context.
So, to be clear, I *still* cannot reproduce the original reports, even
though in theory it is evident how buffer_migrate_folio_norefs() *can*
call filemap_migrate_folio() in atomic context.
How 0-day and syzbot triggered this *without* a large block size block
device is perplexing to me, if it is true that one was not used.
How we still can't reproduce in_atomic() context in folio_mc_copy() is
another fun mystery.
That is to say, I can't see how the existing code could regress here,
given that the only buffer-head filesystem which enables large folios
is the pseudo block device cache filesystem, and you only get LBS
devices if the logical block size is > PAGE_SIZE.
Despite all this, we have two separate reports and no clear information
on whether a large block size device was used or not, and so, given the
traces above, to help root out more bugs with large folios we should just
proactively add might_sleep() to __migrate_folio(). I'll send a patch
for that; it'll enhance our test coverage.
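Something along these lines, just to sketch the idea (not the actual
patch, and the exact placement in __migrate_folio() may differ):

diff --git a/mm/migrate.c b/mm/migrate.c
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 {
 	int rc, expected_count = folio_expected_refs(mapping, src);
 
+	/* catch atomic callers even when we bail before folio_mc_copy() */
+	might_sleep();
+
 	/* Check whether src does not have extra refs before we do more work */
 	if (folio_ref_count(src) != expected_count)
 		return -EAGAIN;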
The reason why we are likely having a hard time reproducing the issue is
this new check:

	/* Check whether src does not have extra refs before we do more work */
	if (folio_ref_count(src) != expected_count)
		return -EAGAIN;

If anything still holds an extra reference on the folio we return -EAGAIN
here, before folio_mc_copy() and its cond_resched() are ever reached.
So, moving on, I think what's best is to see how we can get __find_get_block()
to not chug on during page migration.
Luis