
Commit f09b0ad

matttbe authored and kuba-moo committed
mptcp: close subflow when receiving TCP+FIN
When a peer decides to close one subflow in the middle of a connection having multiple subflows, the receiver of the first FIN should accept that, and close the subflow on its side as well. If not, the subflow will stay half closed, and would even continue to be used until the end of the MPTCP connection or a reset from the network.

The issue has not been seen before, probably because the in-kernel path-manager always sends a RM_ADDR before closing the subflow. Upon the reception of this RM_ADDR, the other peer will initiate the closure on its side as well. On the other hand, if the RM_ADDR is lost, or if the path-manager of the other peer only closes the subflow without sending a RM_ADDR, the subflow would switch to TCP_CLOSE_WAIT, but that's it, leaving the subflow half-closed.

So now, when the subflow switches to the TCP_CLOSE_WAIT state, and if the MPTCP connection has not been closed before with a DATA_FIN, the kernel owning the subflow schedules its worker to initiate the closure on its side as well.

This issue can be easily reproduced with packetdrill, as visible in [1], by creating an additional subflow, injecting a FIN+ACK before sending the DATA_FIN, and expecting a FIN+ACK in return.

Fixes: 40947e1 ("mptcp: schedule worker when subflow is closed")
Cc: stable@vger.kernel.org
Link: multipath-tcp/packetdrill#154 [1]
Reviewed-by: Mat Martineau <martineau@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Link: https://patch.msgid.link/20240826-net-mptcp-close-extra-sf-fin-v1-1-905199fe1172@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
1 parent bac76cf commit f09b0ad

2 files changed: 10 additions & 3 deletions


net/mptcp/protocol.c (4 additions & 1 deletion)

@@ -2533,8 +2533,11 @@ static void __mptcp_close_subflow(struct sock *sk)
 
 	mptcp_for_each_subflow_safe(msk, subflow, tmp) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+		int ssk_state = inet_sk_state_load(ssk);
 
-		if (inet_sk_state_load(ssk) != TCP_CLOSE)
+		if (ssk_state != TCP_CLOSE &&
+		    (ssk_state != TCP_CLOSE_WAIT ||
+		     inet_sk_state_load(sk) != TCP_ESTABLISHED))
 			continue;
 
 		/* 'subflow_data_ready' will re-sched once rx queue is empty */

net/mptcp/subflow.c (6 additions & 2 deletions)

@@ -1255,12 +1255,16 @@ static void mptcp_subflow_discard_data(struct sock *ssk, struct sk_buff *skb,
 /* sched mptcp worker to remove the subflow if no more data is pending */
 static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
 {
-	if (likely(ssk->sk_state != TCP_CLOSE))
+	struct sock *sk = (struct sock *)msk;
+
+	if (likely(ssk->sk_state != TCP_CLOSE &&
+		   (ssk->sk_state != TCP_CLOSE_WAIT ||
+		    inet_sk_state_load(sk) != TCP_ESTABLISHED)))
 		return;
 
 	if (skb_queue_empty(&ssk->sk_receive_queue) &&
 	    !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags))
-		mptcp_schedule_work((struct sock *)msk);
+		mptcp_schedule_work(sk);
 }
 
 static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
