BACKPORT: userfaultfd: fix mmap_changing checking in mfill_atomic_hugetlb

In mfill_atomic_hugetlb(), mmap_changing isn't being checked
again if we drop mmap_lock and reacquire it. When the lock is not held,
mmap_changing could have been incremented. This is also inconsistent
with the behavior in mfill_atomic().
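
For context, the invariant being restored is: whenever the lock protecting the
address space is dropped and retaken on the retry path, the non-cooperative
event counter must be re-read before continuing, and the operation must bail
out with -EAGAIN if it changed, as the non-hugetlb path already does. Below is
a minimal userspace sketch of that recheck-after-relock pattern; all names are
hypothetical and it is only an analogue of the kernel code, not the patch
itself:

    /* Userspace analogue of the recheck-after-relock pattern (hypothetical names). */
    #include <errno.h>
    #include <pthread.h>
    #include <stdatomic.h>

    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_int map_changing;	/* analogue of the kernel's mmap_changing counter */

    /* One copy step; may return -ENOENT to request a retry without the lock. */
    static int fill_one(void)
    {
    	return 0;
    }

    static int fill_range(int steps)
    {
    	int err = 0;

    	pthread_mutex_lock(&map_lock);
    	while (steps--) {
    		err = fill_one();
    		if (err == -ENOENT) {
    			/* Slow path: do work without the lock, then retake it. */
    			pthread_mutex_unlock(&map_lock);
    			/* ... e.g. copy the source page while unlocked ... */
    			pthread_mutex_lock(&map_lock);
    			/*
    			 * The mapping may have changed while the lock was
    			 * dropped: re-read the counter and let the caller
    			 * retry instead of continuing on stale state.
    			 */
    			if (atomic_load(&map_changing)) {
    				err = -EAGAIN;
    				break;
    			}
    			continue;
    		}
    		if (err)
    			break;
    	}
    	pthread_mutex_unlock(&map_lock);
    	return err;
    }

    int main(void)
    {
    	return fill_range(4) ? 1 : 0;
    }

Without the recheck after retaking the lock in the slow path, a concurrent
mremap-like operation that bumps the counter while the lock is dropped would go
unnoticed, which is exactly the window this patch closes in the hugetlb path.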

Link: https://lkml.kernel.org/r/20240117223729.1444522-1-lokeshgidra@google.com
Fixes: df2cc96e77 ("userfaultfd: prevent non-cooperative events vs mcopy_atomic races")
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Brian Geffon <bgeffon@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Nicolas Geoffray <ngeoffray@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

(cherry picked from commit 67695f18d55924b2013534ef3bdc363bc9e14605)
Conflicts:
	mm/userfaultfd.c

1. Update mfill_atomic_hugetlb() parameters to pass 'wp_copy' and 'mode'
   instead of 'flags'.

Bug: 320478828
Bug: 339845931
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
(cherry picked from https://android-review.googlesource.com/q/commit:ac96edb501b6313955813ec4b4ae475867cf3751)
Merged-In: I11ef09b2b8e477c32cc731205fd48b25bcbd020f
Change-Id: I11ef09b2b8e477c32cc731205fd48b25bcbd020f

--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -328,7 +328,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 					      unsigned long src_start,
 					      unsigned long len,
 					      enum mcopy_atomic_mode mode,
-					      bool wp_copy)
+					      bool wp_copy,
+					      atomic_t *mmap_changing)
 {
 	int vm_shared = dst_vma->vm_flags & VM_SHARED;
 	ssize_t err;
@@ -445,6 +446,15 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 				goto out;
 			}
 			mmap_read_lock(dst_mm);
+			/*
+			 * If memory mappings are changing because of non-cooperative
+			 * operation (e.g. mremap) running in parallel, bail out and
+			 * request the user to retry later
+			 */
+			if (mmap_changing && atomic_read(mmap_changing)) {
+				err = -EAGAIN;
+				break;
+			}
 
 			dst_vma = NULL;
 			goto retry;
@@ -481,7 +491,8 @@ extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 				      unsigned long src_start,
 				      unsigned long len,
 				      enum mcopy_atomic_mode mode,
-				      bool wp_copy);
+				      bool wp_copy,
+				      atomic_t *mmap_changing);
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
@@ -602,7 +613,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	if (is_vm_hugetlb_page(dst_vma))
 		return __mcopy_atomic_hugetlb(dst_mm, dst_vma, dst_start,
 					      src_start, len, mcopy_mode,
-					      wp_copy);
+					      wp_copy, mmap_changing);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;