
Commit 37cb0aa

Yang Shi authored and Catalin Marinas committed
arm64: mm: make linear mapping permission update more robust for partial range

Commit fcf8dda ("arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings") made permission updates for partial ranges more robust. But the linear mapping permission update still assumes the whole range is being updated, iterating from the first page of the area all the way to its last page. Make it more robust by starting the update at the page mapped by the start address and decrementing numpages as each page is processed.

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Yang Shi <yang@os.amperecomputing.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
1 parent c320dbb commit 37cb0aa

1 file changed

arch/arm64/mm/pageattr.c

Lines changed: 3 additions & 3 deletions
@@ -148,7 +148,6 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
-	int i;

 	if (!PAGE_ALIGNED(addr)) {
 		start &= PAGE_MASK;
@@ -184,8 +183,9 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 */
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
			    pgprot_val(clear_mask) == PTE_RDONLY)) {
-		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+		unsigned long idx = (start - (unsigned long)area->addr) >> PAGE_SHIFT;
+
+		for (; numpages; idx++, numpages--) {
+			__change_memory_common((u64)page_address(area->pages[idx]),
				PAGE_SIZE, set_mask, clear_mask);
		}
	}
