Commit d8d9bbb

dchinner authored and committed
xfs: reduce the number of atomic when locking a buffer after lookup
Avoid an extra atomic operation in the non-trylock case by only doing a
trylock if the XBF_TRYLOCK flag is set. This follows the pattern in the
I/O path with NOWAIT semantics, where the "trylock-fail-lock" path showed
5-10% reduced throughput compared to a single lock call when not under
NOWAIT conditions. So make that same change here, too.

See commit 942491c ("xfs: fix AIM7 regression") for details.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
1 parent: 3480008

1 file changed: fs/xfs/xfs_buf.c (3 additions, 2 deletions)
@@ -534,11 +534,12 @@ xfs_buf_find_lock(
 	struct xfs_buf		*bp,
 	xfs_buf_flags_t		flags)
 {
-	if (!xfs_buf_trylock(bp)) {
-		if (flags & XBF_TRYLOCK) {
+	if (flags & XBF_TRYLOCK) {
+		if (!xfs_buf_trylock(bp)) {
 			XFS_STATS_INC(bp->b_mount, xb_busy_locked);
 			return -EAGAIN;
 		}
+	} else {
 		xfs_buf_lock(bp);
 		XFS_STATS_INC(bp->b_mount, xb_get_locked_waited);
 	}
