Commit 2769409
btrfs: clear defragmented inodes using postorder in btrfs_cleanup_defrag_inodes()
btrfs_cleanup_defrag_inodes() is not called frequently, only on remount or unmount, but the way it frees the inodes in fs_info->defrag_inodes is inefficient. Each iteration it needs to locate the first node, remove it, and potentially rebalance the tree, until the tree is empty. That scheme does allow a conditional reschedule between iterations.

For cleanups the rbtree_postorder_for_each_entry_safe() iterator is convenient, but with it we can't reschedule and restart the iteration, because some of the tree nodes would already be freed. The cleanup operation is kmem_cache_free(), which will likely take the fast path for most objects, so rescheduling should not be necessary.

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Parent: ffc5316

1 file changed: fs/btrfs/defrag.c (4 additions, 10 deletions)
@@ -212,20 +212,14 @@ static struct inode_defrag *btrfs_pick_defrag_inode(
 
 void btrfs_cleanup_defrag_inodes(struct btrfs_fs_info *fs_info)
 {
-	struct inode_defrag *defrag;
-	struct rb_node *node;
+	struct inode_defrag *defrag, *next;
 
 	spin_lock(&fs_info->defrag_inodes_lock);
-	node = rb_first(&fs_info->defrag_inodes);
-	while (node) {
-		rb_erase(node, &fs_info->defrag_inodes);
-		defrag = rb_entry(node, struct inode_defrag, rb_node);
-		kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
 
-		cond_resched_lock(&fs_info->defrag_inodes_lock);
+	rbtree_postorder_for_each_entry_safe(defrag, next,
+					     &fs_info->defrag_inodes, rb_node)
+		kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
 
-		node = rb_first(&fs_info->defrag_inodes);
-	}
 	spin_unlock(&fs_info->defrag_inodes_lock);
 }
231225
