
Commit 01e807e

zhangyi089 authored and tytso committed
ext4: make online defragmentation support large folios
move_extent_per_page() currently assumes that each folio is the size of
PAGE_SIZE and copies data for only one page, so ext4_move_extents() calls
it once per page. To support large folios, simply adjust the calculations
of the block start and end offsets within the folio based on the provided
range of 'data_offset_in_page' and 'block_len_in_page'. The function
continues to handle PAGE_SIZE of data at a time and is not converted to
manage an entire folio. Additionally, since the source folio is used to
copy the data, it does not matter whether the source and destination
folios differ in size.

Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Link: https://patch.msgid.link/20250512063319.3539411-8-yi.zhang@huaweicloud.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
1 parent cd9f76d commit 01e807e

1 file changed: fs/ext4/move_extent.c (4 additions, 7 deletions)
@@ -269,7 +269,7 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
 	unsigned int tmp_data_size, data_size, replaced_size;
 	int i, err2, jblocks, retries = 0;
 	int replaced_count = 0;
-	int from = data_offset_in_page << orig_inode->i_blkbits;
+	int from;
 	int blocks_per_page = PAGE_SIZE >> orig_inode->i_blkbits;
 	struct super_block *sb = orig_inode->i_sb;
 	struct buffer_head *bh = NULL;
@@ -323,11 +323,6 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
 	 * hold page's lock, if it is still the case data copy is not
 	 * necessary, just swap data blocks between orig and donor.
 	 */
-
-	VM_BUG_ON_FOLIO(folio_test_large(folio[0]), folio[0]);
-	VM_BUG_ON_FOLIO(folio_test_large(folio[1]), folio[1]);
-	VM_BUG_ON_FOLIO(folio_nr_pages(folio[0]) != folio_nr_pages(folio[1]), folio[1]);
-
 	if (unwritten) {
 		ext4_double_down_write_data_sem(orig_inode, donor_inode);
 		/* If any of extents in range became initialized we have to
@@ -360,6 +355,8 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
 		goto unlock_folios;
 	}
 data_copy:
+	from = offset_in_folio(folio[0],
+			       orig_blk_offset << orig_inode->i_blkbits);
 	*err = mext_page_mkuptodate(folio[0], from, from + replaced_size);
 	if (*err)
 		goto unlock_folios;
@@ -390,7 +387,7 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
 	if (!bh)
 		bh = create_empty_buffers(folio[0],
					  1 << orig_inode->i_blkbits, 0);
-	for (i = 0; i < data_offset_in_page; i++)
+	for (i = 0; i < from >> orig_inode->i_blkbits; i++)
 		bh = bh->b_this_page;
 	for (i = 0; i < block_len_in_page; i++) {
 		*err = ext4_get_block(orig_inode, orig_blk_offset + i, bh, 0);
