<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-size:small"><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Sep 9, 2020 at 4:46 PM Li Wang <<a href="mailto:liwang@redhat.com">liwang@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div style="font-size:small"><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Sep 8, 2020 at 11:36 PM Cyril Hrubis <<a href="mailto:chrubis@suse.cz" target="_blank">chrubis@suse.cz</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi!<br>
> And I'd like to add the MAP_GROWSDOWN test too, but I haven't come up with<br>
> a way to solve the segmentation fault on s390x.<br>
> <a href="http://lists.linux.it/pipermail/ltp/2020-August/018416.html" rel="noreferrer" target="_blank">http://lists.linux.it/pipermail/ltp/2020-August/018416.html</a><br>
<br>
Maybe we end up allocating a mapping that is too close to something<br>
else, see:<br>
<br>
<a href="https://stackoverflow.com/questions/56888725/why-is-map-growsdown-mapping-does-not-grow" rel="noreferrer" target="_blank">https://stackoverflow.com/questions/56888725/why-is-map-growsdown-mapping-does-not-grow</a><br>
<br>
I guess that we should make the initial mmap in find_free_range() larger<br>
and return an address that is guaranteed not to have a mapping that is<br>
closer than 256 pages in the direction we want to grow.<br></blockquote><div><br></div><div style="font-size:small">Sounds reasonable. I tried reserving more space for the mapping to grow into, and that works for me :).</div></div></div></blockquote><div><br></div><div class="gmail_default" style="font-size:small">To be precise, we could reserve a 256-page gap at the end of the free range</div><div class="gmail_default" style="font-size:small">so that the stack stays away from any preceding mapping while it grows.</div><div class="gmail_default" style="font-size:small">(My only concern is that stack_guard_gap can be changed via the kernel command</div><div class="gmail_default" style="font-size:small">line, but I assume that rarely happens, so the default 256 pages is used here.)</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">If there is no objection, I'll make these changes in patch V4.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">--------</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">static void *find_free_range(size_t size)<br>{<br> void *mem;<br> long stack_guard_gap = 256 * getpagesize();<br><br> /*<br> * Newer kernels do not allow a MAP_GROWSDOWN mapping to grow<br> * closer than stack_guard_gap pages to a preceding mapping.<br> * The guard ensures that the next-highest mapped page stays more<br> * than stack_guard_gap pages below the lowest stack address;<br> * otherwise the access triggers a segfault. So reserve an extra<br> * 256 pages of spacing here so the stack can grow safely.<br> */<br> mem = SAFE_MMAP(NULL, size + stack_guard_gap, PROT_READ | PROT_WRITE,<br> MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);<br> SAFE_MUNMAP(mem, size + stack_guard_gap);<br><br> return mem;<br>}<br><br>static void split_unmapped_plus_stack(void *start, size_t size)<br>{<br> /* start start + size<br> * +---------------------+----------------------+-----------+<br> * | unmapped | mapped | 256 pages |<br> * +---------------------+----------------------+-----------+<br> * stack<br> */<br> stack = SAFE_MMAP(start + size, size, PROT_READ | PROT_WRITE,<br> MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS | MAP_GROWSDOWN,<br> -1, 0);<br>}<br></div></div><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div>Regards,<br></div><div>Li Wang<br></div></div></div></div>