ANDROID: mm: bail out tlb free batching on page zapping when cma is going on

I found that CMA allocation sometimes took a long time to succeed
because one of the target pages was in the middle of being zapped
(e.g., by munmap or exit), so alloc_contig_range couldn't migrate the
page even though its mapcount was already zero. The CMA allocator has
to wait until TLB free batching actually frees the page, and that
batched free happens in the target process's context, which is quite
random; sometimes it is a very low priority process on a little core.
This makes CMA allocation very slow, up to several hundred
milliseconds.

To solve the issue, make TLB free batching aware of CMA progress:
whenever a CMA allocation is in flight, TLB free batching should bail
out (flush) as soon as possible to minimize CMA allocation latency.

Bug: 192475091
Signed-off-by: Minchan Kim <minchan@google.com>
Change-Id: Ic76ecff795639085c4372791d922301467563a06
commit 9938b82be1
parent c8578a3e90
Author: Minchan Kim, 2021-06-28 18:47:11 -07:00
Committed by: Todd Kjos

@@ -1302,7 +1302,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			page_remove_rmap(page, false);
 			if (unlikely(page_mapcount(page) < 0))
 				print_bad_pte(vma, addr, ptent, page);
-			if (unlikely(__tlb_remove_page(tlb, page))) {
+			if (unlikely(__tlb_remove_page(tlb, page)) ||
+					lru_cache_disabled()) {
 				force_flush = 1;
 				addr += PAGE_SIZE;
 				break;
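The effect of the hunk above can be illustrated with a small userspace sketch. This is hypothetical stand-in code, not the kernel API: `lru_cache_disabled_stub`, `fake_tlb_batch`, and the other names are inventions for illustration. The idea it models is the one the patch relies on: CMA's alloc_contig_range disables the LRU cache while it runs, so `lru_cache_disabled()` acts as a "CMA in progress" signal, and the patched condition forces a flush of the pending batch on every page while that signal is set.

```c
#include <stdbool.h>
#include <stddef.h>

#define BATCH_MAX 8	/* illustrative batch size, not the kernel's */

/* Stand-in for the state lru_cache_disabled() reports in the kernel. */
static bool cma_allocation_in_progress;

static bool lru_cache_disabled_stub(void)
{
	return cma_allocation_in_progress;
}

struct fake_tlb_batch {
	void *pages[BATCH_MAX];
	size_t nr;
	int flushes;
};

/* Queue a page for deferred freeing; returns true when the batch is
 * full and the caller must flush (like __tlb_remove_page). */
static bool tlb_remove_page_stub(struct fake_tlb_batch *tlb, void *page)
{
	tlb->pages[tlb->nr++] = page;
	return tlb->nr == BATCH_MAX;
}

static void tlb_flush_stub(struct fake_tlb_batch *tlb)
{
	tlb->nr = 0;	/* the queued pages would really be freed here */
	tlb->flushes++;
}

/* Mirrors the patched zap loop: flush early whenever CMA is running,
 * instead of waiting for the batch to fill. */
static void zap_pages_stub(struct fake_tlb_batch *tlb,
			   void **pages, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (tlb_remove_page_stub(tlb, pages[i]) ||
				lru_cache_disabled_stub())
			tlb_flush_stub(tlb);
	}
	if (tlb->nr)
		tlb_flush_stub(tlb);	/* final flush at end of zap */
}
```

With the flag clear, 16 pages are freed in two full batches; with it set, every page is flushed immediately, which is the "bail out asap" behavior the patch adds so CMA never waits long for a page held in someone else's batch.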