
Lobby update fixes and null-card guard in CMatchUI.updateCards#10618

Open
autumnmyst wants to merge 1 commit into Card-Forge:master from autumnmyst:lobby-fixes

Conversation

Contributor

autumnmyst commented May 6, 2026

  • LobbySlot.apply: refactor the per-field if (data.getX() != null) { setX(...); changed = true; } cascade into setIfChanged / setIntIfChanged helpers that skip the write (and the changed flag) when the incoming value already matches the current one. Also routes SchemeDeckName, AvatarVanguard, PlanarDeckName, and DeckName through the same path; these were previously updated but never flipped changed, so listeners weren't notified.
  • NetConnectUtil: drop the explicit server.updateLobbyState() call from the player-change listener. server.updateSlot() already routes through updateLobbyState() via the IUpdateable listener installed above, so the explicit call broadcast a duplicate LobbyUpdateEvent to every client on every slot change.
  • FServerManager.broadcastReadyState: dedupe by slot index, tracking the last broadcast value per slot. Without this, UpdateLobbyPlayerEvents with the same getReady() value (which fire routinely during normal lobby flow) repeated the chat line "Player X is ready" multiple times in a row.
  • CMatchUI.updateCards: change return -> continue on null zone, and add a null check on the CardView itself. Null can flow in from a remote-side event whose IdRef failed to resolve in the tracker; without these guards the loop bailed out and dropped every later card in the same update batch from the visual update.
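The `setIfChanged` pattern described in the first bullet can be sketched roughly as below. This is a minimal illustration of the idea, not Forge's actual `LobbySlot` API; the class, field names, and helper signatures here are placeholders:

```java
import java.util.Objects;
import java.util.function.Consumer;
import java.util.function.IntConsumer;

// Hypothetical sketch: collapse the per-field
// "if (data.getX() != null) { setX(...); changed = true; }" cascade
// into helpers that skip both the write and the changed flag when the
// incoming value already matches the current one.
class LobbySlotSketch {
    private String deckName;
    private int team;
    boolean changed;

    // Generic reference-typed fields: write only on a real change.
    private <T> void setIfChanged(T current, T incoming, Consumer<T> setter) {
        if (incoming != null && !Objects.equals(current, incoming)) {
            setter.accept(incoming);
            changed = true;
        }
    }

    // Primitive int fields get their own overload to avoid boxing.
    private void setIntIfChanged(int current, int incoming, IntConsumer setter) {
        if (current != incoming) {
            setter.accept(incoming);
            changed = true;
        }
    }

    void apply(String newDeckName, int newTeam) {
        changed = false;
        setIfChanged(deckName, newDeckName, v -> deckName = v);
        setIntIfChanged(team, newTeam, v -> team = v);
    }

    String getDeckName() { return deckName; }
}
```

Routing every field (including the previously missed `DeckName`-style fields) through one helper means the `changed` flag, and hence listener notification, can no longer be forgotten for an individual setter.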

Lobby update fixes and null-card guard in CMatchUI.updateCards

- LobbySlot.apply: refactor per-field cascade into setIfChanged /
  setIntIfChanged helpers; skip the change flag when value matches
  current. Also routes SchemeDeckName, AvatarVanguard, PlanarDeckName,
  and DeckName through the changed-flag path (previously dropped).
- NetConnectUtil: drop redundant server.updateLobbyState() call from
  the player-change listener; updateSlot already triggers it via the
  IUpdateable listener above, so the explicit call duplicated every
  LobbyUpdateEvent broadcast.
- FServerManager.broadcastReadyState: dedupe by slot index so the
  ready-state chat line fires once per transition, not on every
  UpdateLobbyPlayerEvent that carries a getReady() value.
- CMatchUI.updateCards: change return -> continue on null zone, plus
  null-check the CardView. A transient-null card in the middle of an
  update batch no longer drops the rest of the batch from the visual
  update.
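The `updateCards` guard in the last bullet can be illustrated with a stand-in loop. `CardView` and the zone lookup here are simplified placeholders for Forge's types; the point is only the control flow, where `continue` replaces the old `return`:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: skip a null CardView (e.g. an IdRef that failed to
// resolve in the tracker) or a null zone, instead of bailing out of the
// whole loop and dropping every later card in the same update batch.
class UpdateCardsSketch {
    record CardView(String name, String zone) {}

    // Returns the names of cards that actually got a visual refresh.
    static List<String> updateCards(Iterable<CardView> batch) {
        List<String> refreshed = new ArrayList<>();
        for (CardView card : batch) {
            if (card == null) {
                continue; // unresolved remote-side card: skip it, keep going
            }
            if (card.zone() == null) {
                continue; // previously `return`, which dropped the rest of the batch
            }
            refreshed.add(card.name());
        }
        return refreshed;
    }
}
```

With the guards in place, one transient-null entry mid-batch costs only that entry's refresh rather than everything after it.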
Contributor

MostCromulent commented May 6, 2026

Null can flow in from a remote-side event whose IdRef failed to resolve in the tracker; without these guards the loop bailed out and dropped every later card in the same update batch from the visual update.

I assume this is based on a bug you experienced. Any chance you can post a copy of your network logs from when this happened?

What did it manifest as in terms of visual glitch / how does it manifest with your fix in place?

In terms of root cause we can probably tolerate a transient visual glitch if it self corrects on next network packet, but if it's resulting in more serious visual desync will need further investigation.

});

private final Map<Integer, RemoteClientGuiGame> playerGuis = new ConcurrentHashMap<>(); // Store RemoteClientGuiGame instances for reuse
private final Map<Integer, Boolean> lastBroadcastReadyState = new ConcurrentHashMap<>(); // dedupe ready-state chat broadcasts
Contributor


you can't just cache all slots without ever refreshing: if a client leaves while ready and then comes back, or returns to the lobby after a game ends, some ready messages will get lost instead

Contributor Author


True, let me address that

Contributor Author

autumnmyst May 6, 2026


Basically we want to rebroadcast even if the state doesn’t change for:

  1. Client leaves and joins again
  2. Game completes, client in lobby

So clear cache on client leave / game completion
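That proposal could look roughly like the following. The class and method names are illustrative, not `FServerManager`'s actual members; it assumes the dedupe map from the diff above plus two invalidation hooks:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: dedupe ready-state chat broadcasts per slot, but
// forget cached state when a client leaves or a game completes, so the
// two rebroadcast cases above still fire.
class ReadyStateDedupe {
    private final Map<Integer, Boolean> lastBroadcastReadyState = new ConcurrentHashMap<>();

    // True only when this slot's ready value differs from the last one
    // we broadcast (or we have no cached value for the slot).
    boolean shouldBroadcast(int slotIndex, boolean ready) {
        Boolean prev = lastBroadcastReadyState.put(slotIndex, ready);
        return prev == null || prev != ready;
    }

    // Case 1: client leaves; a rejoin must rebroadcast even if unchanged.
    void onClientLeave(int slotIndex) {
        lastBroadcastReadyState.remove(slotIndex);
    }

    // Case 2: game completes and everyone returns to the lobby.
    void onGameCompleted() {
        lastBroadcastReadyState.clear();
    }
}
```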

Contributor


I wonder if that's really the right approach though...
would probably be better to refactor it so only isReadyUpdate carries the flag
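The alternative floated here could be sketched as below: tag the event itself rather than caching broadcast history on the server. The event shape is purely illustrative and not Forge's actual `UpdateLobbyPlayerEvent` API; the idea is that only events constructed as ready updates carry a ready value, so the broadcaster has nothing to dedupe:

```java
// Hedged sketch of the isReadyUpdate refactor: non-ready lobby updates
// (deck change, avatar change, ...) carry no ready flag at all, so the
// chat line is emitted only for genuine ready transitions at the source.
class LobbyPlayerEventSketch {
    private final Boolean ready; // null unless this event IS a ready update

    private LobbyPlayerEventSketch(Boolean ready) { this.ready = ready; }

    static LobbyPlayerEventSketch deckChange() { return new LobbyPlayerEventSketch(null); }
    static LobbyPlayerEventSketch readyUpdate(boolean ready) { return new LobbyPlayerEventSketch(ready); }

    boolean isReadyUpdate() { return ready != null; }
    boolean getReady()      { return ready; } // only valid when isReadyUpdate()

    // Broadcaster side: no per-slot cache needed.
    String chatLine(String playerName) {
        return isReadyUpdate() ? playerName + " is " + (getReady() ? "ready" : "not ready") : null;
    }
}
```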

Contributor Author

autumnmyst commented May 6, 2026

Null can flow in from a remote-side event whose IdRef failed to resolve in the tracker; without these guards the loop bailed out and dropped every later card in the same update batch from the visual update.

I assume this is based on a bug you experienced. Any chance you can post a copy of your network logs from when this happened?

What did it manifest as in terms of visual glitch / how does it manifest with your fix in place?

In terms of root cause we can probably tolerate a transient visual glitch if it self corrects on next network packet, but if it's resulting in more serious visual desync will need further investigation.

I came across this issue while looking at logs from intermittent crashes, specifically on the remote client in online games. Basically, if for whatever reason the IdRef fails to resolve, the remote client gets multiple NPEs while trying to perform a visual update. This change skips over such cards instead of always trying to get the zone when the CardView is null, and instead of stopping iteration when the zone is null.


4 participants