Commit 84bf1ac

jacob-keller authored and anguy11 committed
ice: fix Rx page leak on multi-buffer frames
The ice_put_rx_mbuf() function handles calling ice_put_rx_buf() for each buffer in the current frame. This function was introduced as part of handling multi-buffer XDP support in the ice driver. It works by iterating over the buffers from first_desc up to 1 plus the total number of fragments in the frame, cached from before the XDP program was executed.

If the hardware posts a descriptor with a size of 0, the logic used in ice_put_rx_mbuf() breaks. Such descriptors get skipped and don't get added as fragments in ice_add_xdp_frag(). Since the buffer isn't counted as a fragment, we do not iterate over it in ice_put_rx_mbuf(), and thus we don't call ice_put_rx_buf().

Because we don't call ice_put_rx_buf(), we don't attempt to re-use the page or free it. This leaves a stale page in the ring, as we don't increment next_to_alloc. The ice_reuse_rx_page() function assumes that next_to_alloc has been incremented properly, and that it always points to a buffer with a NULL page. Since this function doesn't check, it will happily recycle a page over the top of the next_to_alloc buffer, losing track of the old page.

Note that this leak only occurs for multi-buffer frames. The ice_put_rx_mbuf() function always handles at least one buffer, so a single-buffer frame will always get handled correctly. It is not clear precisely why the hardware sometimes hands us descriptors with a size of 0, but it happens somewhat regularly with jumbo frames used by a 9K MTU.

To fix ice_put_rx_mbuf(), we need to make sure to call ice_put_rx_buf() on all buffers between first_desc and next_to_clean. Borrow the logic of a similar function in i40e used for this same purpose. Use the same logic also in ice_get_pgcnts(). Instead of iterating over just the number of fragments, use a loop which iterates until the current index reaches the next_to_clean element just past the current frame.
Unlike i40e, the ice_put_rx_mbuf() function does call ice_put_rx_buf() on the last buffer of the frame indicating the end of packet. For non-linear (multi-buffer) frames, we need to take care when adjusting pagecnt_bias. An XDP program might release fragments from the tail of the frame, in which case that fragment page is already released. Only update pagecnt_bias for the first descriptor and the fragments still remaining after the XDP program has run. Take care to only access the shared info for fragmented buffers, as this avoids a significant cache miss.

The xdp_xmit value only needs to be updated if an XDP program is run, and only once per packet. Drop the xdp_xmit pointer argument from ice_put_rx_mbuf(). Instead, set xdp_xmit in the ice_clean_rx_irq() function directly. This avoids needing to pass the argument and avoids an extra bit-wise OR for each buffer in the frame.

Move the increment of the ntc local variable to ensure it's updated *before* all calls to ice_get_pgcnts() or ice_put_rx_mbuf(), as the loop logic requires the index of the element just past the current frame.

Now that we use an index pointer in the ring to identify the packet, we no longer need to track or cache the number of fragments in the rx_ring.

Cc: Christoph Petrausch <christoph.petrausch@deepl.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Reported-by: Jaroslav Pulchart <jaroslav.pulchart@gooddata.com>
Closes: https://lore.kernel.org/netdev/CAK8fFZ4hY6GUJNENz3wY9jaYLZXGfpr7dnZxzGMYoE44caRbgw@mail.gmail.com/
Fixes: 743bbd9 ("ice: put Rx buffers after being done with current frame")
Tested-by: Michal Kubiak <michal.kubiak@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
Tested-by: Priya Singh <priyax.singh@intel.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
1 parent 93ab488 commit 84bf1ac

2 files changed

Lines changed: 34 additions & 47 deletions

File tree

drivers/net/ethernet/intel/ice/ice_txrx.c

Lines changed: 34 additions & 46 deletions
@@ -894,10 +894,6 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
 	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
 				   rx_buf->page_offset, size);
 	sinfo->xdp_frags_size += size;
-	/* remember frag count before XDP prog execution; bpf_xdp_adjust_tail()
-	 * can pop off frags but driver has to handle it on its own
-	 */
-	rx_ring->nr_frags = sinfo->nr_frags;
 
 	if (page_is_pfmemalloc(rx_buf->page))
 		xdp_buff_set_frag_pfmemalloc(xdp);
@@ -968,20 +964,20 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
 /**
  * ice_get_pgcnts - grab page_count() for gathered fragments
  * @rx_ring: Rx descriptor ring to store the page counts on
+ * @ntc: the next to clean element (not included in this frame!)
  *
  * This function is intended to be called right before running XDP
  * program so that the page recycling mechanism will be able to take
  * a correct decision regarding underlying pages; this is done in such
  * way as XDP program can change the refcount of page
  */
-static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
+static void ice_get_pgcnts(struct ice_rx_ring *rx_ring, unsigned int ntc)
 {
-	u32 nr_frags = rx_ring->nr_frags + 1;
 	u32 idx = rx_ring->first_desc;
 	struct ice_rx_buf *rx_buf;
 	u32 cnt = rx_ring->count;
 
-	for (int i = 0; i < nr_frags; i++) {
+	while (idx != ntc) {
 		rx_buf = &rx_ring->rx_buf[idx];
 		rx_buf->pgcnt = page_count(rx_buf->page);

@@ -1154,62 +1150,51 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
 }
 
 /**
- * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all frame frags
+ * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all buffers in frame
  * @rx_ring: Rx ring with all the auxiliary data
  * @xdp: XDP buffer carrying linear + frags part
- * @xdp_xmit: XDP_TX/XDP_REDIRECT verdict storage
- * @ntc: a current next_to_clean value to be stored at rx_ring
+ * @ntc: the next to clean element (not included in this frame!)
  * @verdict: return code from XDP program execution
  *
- * Walk through gathered fragments and satisfy internal page
- * recycle mechanism; we take here an action related to verdict
- * returned by XDP program;
+ * Called after XDP program is completed, or on error with verdict set to
+ * ICE_XDP_CONSUMED.
+ *
+ * Walk through buffers from first_desc to the end of the frame, releasing
+ * buffers and satisfying internal page recycle mechanism. The action depends
+ * on verdict from XDP program.
 */
 static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
-			    u32 *xdp_xmit, u32 ntc, u32 verdict)
+			    u32 ntc, u32 verdict)
 {
-	u32 nr_frags = rx_ring->nr_frags + 1;
 	u32 idx = rx_ring->first_desc;
 	u32 cnt = rx_ring->count;
-	u32 post_xdp_frags = 1;
 	struct ice_rx_buf *buf;
-	int i;
+	u32 xdp_frags = 0;
+	int i = 0;
 
 	if (unlikely(xdp_buff_has_frags(xdp)))
-		post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
+		xdp_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
 
-	for (i = 0; i < post_xdp_frags; i++) {
+	while (idx != ntc) {
 		buf = &rx_ring->rx_buf[idx];
+		if (++idx == cnt)
+			idx = 0;
 
-		if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
+		/* An XDP program could release fragments from the end of the
+		 * buffer. For these, we need to keep the pagecnt_bias as-is.
+		 * To do this, only adjust pagecnt_bias for fragments up to
+		 * the total remaining after the XDP program has run.
+		 */
+		if (verdict != ICE_XDP_CONSUMED)
 			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-			*xdp_xmit |= verdict;
-		} else if (verdict & ICE_XDP_CONSUMED) {
+		else if (i++ <= xdp_frags)
 			buf->pagecnt_bias++;
-		} else if (verdict == ICE_XDP_PASS) {
-			ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
-		}
 
 		ice_put_rx_buf(rx_ring, buf);
-
-		if (++idx == cnt)
-			idx = 0;
-	}
-
-	/* handle buffers that represented frags released by XDP prog;
-	 * for these we keep pagecnt_bias as-is; refcount from struct page
-	 * has been decremented within XDP prog and we do not have to increase
-	 * the biased refcnt
-	 */
-	for (; i < nr_frags; i++) {
-		buf = &rx_ring->rx_buf[idx];
-		ice_put_rx_buf(rx_ring, buf);
-		if (++idx == cnt)
-			idx = 0;
 	}
 
 	xdp->data = NULL;
 	rx_ring->first_desc = ntc;
-	rx_ring->nr_frags = 0;
 }
 
 /**
@@ -1317,6 +1302,10 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 		/* retrieve a buffer from the ring */
 		rx_buf = ice_get_rx_buf(rx_ring, size, ntc);
 
+		/* Increment ntc before calls to ice_put_rx_mbuf() */
+		if (++ntc == cnt)
+			ntc = 0;
+
 		if (!xdp->data) {
 			void *hard_start;
 
@@ -1325,24 +1314,23 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
 			xdp_buff_clear_frags_flag(xdp);
 		} else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
-			ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc, ICE_XDP_CONSUMED);
+			ice_put_rx_mbuf(rx_ring, xdp, ntc, ICE_XDP_CONSUMED);
 			break;
 		}
-		if (++ntc == cnt)
-			ntc = 0;
 
 		/* skip if it is NOP desc */
 		if (ice_is_non_eop(rx_ring, rx_desc))
 			continue;
 
-		ice_get_pgcnts(rx_ring);
+		ice_get_pgcnts(rx_ring, ntc);
 		xdp_verdict = ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_desc);
 		if (xdp_verdict == ICE_XDP_PASS)
 			goto construct_skb;
 		total_rx_bytes += xdp_get_buff_len(xdp);
 		total_rx_pkts++;
 
-		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+		ice_put_rx_mbuf(rx_ring, xdp, ntc, xdp_verdict);
+		xdp_xmit |= xdp_verdict & (ICE_XDP_TX | ICE_XDP_REDIR);
 
 		continue;
 construct_skb:
@@ -1355,7 +1343,7 @@ static int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
 			rx_ring->ring_stats->rx_stats.alloc_buf_failed++;
 			xdp_verdict = ICE_XDP_CONSUMED;
 		}
-		ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
+		ice_put_rx_mbuf(rx_ring, xdp, ntc, xdp_verdict);
 
 		if (!skb)
 			break;

drivers/net/ethernet/intel/ice/ice_txrx.h

Lines changed: 0 additions & 1 deletion
@@ -358,7 +358,6 @@ struct ice_rx_ring {
 	struct ice_tx_ring *xdp_ring;
 	struct ice_rx_ring *next;	/* pointer to next ring in q_vector */
 	struct xsk_buff_pool *xsk_pool;
-	u32 nr_frags;
 	u16 max_frame;
 	u16 rx_buf_len;
 	dma_addr_t dma;			/* physical address of ring */
