🤖 feat: incremental workspace.onChat subscription (cursor + live-only) #2413

Open

ThomasK33 wants to merge 20 commits into main from workspace-sync-mc0q

Conversation

@ThomasK33 (Member)

Summary

Add incremental workspace.onChat subscription support so clients can reconnect with a cursor instead of replaying full history. This eliminates redundant data transfer after transient disconnects and enables live-only subscriptions.

Background

Today workspace.onChat always does a full history replay on subscribe/reconnect — re-sending all messages from chat.jsonl, replaying active streams from scratch, and replaying init state. For browser/remote connections, this resends a lot of redundant data after transient disconnects. This PR adds three subscription modes:

  1. full (default): current behavior, unchanged.
  2. since (with cursor): replay only messages newer than the client's cursor, plus incremental stream replay filtered by timestamp.
  3. live: skip all history replay; receive only future events.
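For illustration, the three modes map onto these subscription inputs (call shape as used later in the plan; cursor values are hypothetical):

// Full replay (default when mode is omitted)
await client.workspace.onChat({ workspaceId });

// Incremental reconnect with a cursor (values illustrative)
await client.workspace.onChat({
  workspaceId,
  mode: {
    type: "since",
    cursor: {
      history: { messageId: "msg-41", historySequence: 41 },
      stream: { messageId: "msg-42", lastTimestamp: 1700000000000 },
    },
  },
});

// Live-only: no history replay, future events only
await client.workspace.onChat({ workspaceId, mode: { type: "live" } });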

Implementation

Shared schemas (src/common/orpc/schemas/stream.ts, api.ts, types.ts)

  • New OnChatHistoryCursorSchema, OnChatStreamCursorSchema, OnChatCursorSchema, OnChatModeSchema
  • CaughtUpMessageSchema extended with optional replay (enum) and cursor fields for backward compatibility
  • workspace.onChat.input now accepts optional mode parameter

Backend (router.ts, agentSession.ts, streamManager.ts, aiService.ts)

  • Router threads input.mode to session.replayHistory()
  • AgentSession.emitHistoricalEvents() implements mode-aware replay:
    • live: skips all history, stream, and init replay
    • since: validates cursor against persisted history (id + historySequence), filters messages, and passes afterTimestamp for incremental stream replay
    • Invalid cursors fall back to full replay
  • caught-up event now always includes { replay, cursor } — the server declares which strategy it actually used
  • StreamManager.replayStream() accepts optional afterTimestamp to filter stream parts while always emitting stream-start
  • Init replay only runs for replay === "full"
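For example, after an incremental reconnect a client might receive (values illustrative):

{ type: "caught-up", replay: "since", cursor: { history: { messageId: "msg-57", historySequence: 57 } } }

whereas a stale cursor produces replay: "full", signaling the client to replace local state instead of appending.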

Frontend (StreamingMessageAggregator.ts, WorkspaceStore.ts)

  • StreamingMessageAggregator:
    • New getOnChatCursor() method computes cursor from max historySequence + active stream state
    • loadHistoricalMessages() supports { mode: "append" } for incremental loads (skips clearing)
    • handleStreamStart() is replay-idempotent: won't wipe existing parts when data.replay === true
  • WorkspaceStore:
    • runOnChatSubscription() computes cursor from aggregator state on reconnect
    • handleChatMessage() caught-up handler uses server's replay field to decide replace vs. append
    • Replaces pendingReplayReset pattern with mode-aware reset timing
    • Clears stale active streams when server reports no stream cursor

Validation

  • make static-check passes (typecheck, lint, fmt)
  • make test: 3788 pass, 4 fail (all pre-existing: 2 PolicyContext timeouts, 1 flaky ProjectAddForm, 1 error)
  • WorkspaceStore test suite: 26/26 pass

Risks

  • Low: Default behavior (mode=undefined → full replay) is preserved exactly; incremental modes are opt-in from the frontend.
  • Medium: Cursor validation logic — invalid/stale cursors always fall back to full replay, so worst case is unnecessary full replays, not data loss.
  • Low: CaughtUpMessageSchema changes are backward-compatible (new fields are optional).

📋 Implementation Plan

Plan: Incremental workspace.onChat subscription (cursor + live-only)

Context / Why

Today workspace.onChat always does a full history replay on subscribe/reconnect (messages from chat.jsonl + optional stream replay + init replay), and the frontend resets its chat state to accept that replay.

For browser/remote connections this can resend a lot of redundant data after transient disconnects. The goal is to support:

  1. Reconnect-with-cursor: client provides “what I’ve already seen”; backend sends only what’s newer.
  2. Live-only subscribe: client can opt into new events only (no history replay).

Key constraints:

  • Must preserve correctness (no duplicate/garbled streams, no stuck “loading workspace…” state).
  • Must support fallback-to-full-replay when cursors are invalid/stale.
  • Keep the ORPC async-generator subscription pattern intact.

Evidence (current code)

  • Backend subscription handler: src/node/orpc/router.ts workspace.onChat (async generator; sets up relay + calls session.replayHistory(); sends heartbeats).
  • Backend replay: src/node/services/agentSession.ts replayHistory() / emitHistoricalEvents() (reads getHistoryFromLatestBoundary(skip=1), optionally calls aiService.replayStream(), replays init, always emits {type:"caught-up"}).
  • Stream replay: src/node/services/streamManager.ts replayStream() emits stream-start + replays parts with replay: true.
  • Relay/dedupe during replay: src/node/orpc/replayBufferedStreamMessageRelay.ts (buffers live deltas during replay; dedupes by [type, messageId, timestamp, delta]).
  • Frontend subscription loop: src/browser/stores/WorkspaceStore.ts runOnChatSubscription() + resetChatStateForReplay().
  • Frontend caught-up barrier + batch load: WorkspaceStore.handleChatMessage() buffers until caught-up, then calls StreamingMessageAggregator.loadHistoricalMessages().
  • Aggregator behavior:
    • StreamingMessageAggregator.loadHistoricalMessages() currently clears all state.
    • StreamingMessageAggregator.handleStreamStart() currently overwrites the streaming message with empty parts, even during replay.

Proposed API / contract

1) Extend workspace.onChat input

Add a mode discriminant. Default remains full replay.

// src/common/orpc/schemas/api.ts
onChat.input = z.object({
  workspaceId: z.string(),
  mode: OnChatModeSchema.optional(),
});

// shared schema (recommend placing in stream.ts so CaughtUp can reuse)
const OnChatHistoryCursorSchema = z.object({
  messageId: z.string(),
  historySequence: z.number(),
});

const OnChatStreamCursorSchema = z.object({
  messageId: z.string(),
  lastTimestamp: z.number(),
});

const OnChatCursorSchema = z.object({
  history: OnChatHistoryCursorSchema.optional(),
  stream: OnChatStreamCursorSchema.optional(),
});

const OnChatModeSchema = z.discriminatedUnion("type", [
  z.object({ type: z.literal("full") }),
  z.object({ type: z.literal("since"), cursor: OnChatCursorSchema }),
  z.object({ type: z.literal("live") }),
]);

Semantics:

  • full (default): current behavior.
  • since: replay only messages newer than the provided cursor (plus replay stream incrementally if possible).
  • live: do not replay persisted history; only future events. (Optionally: if a stream is currently active, server may emit a stream-start snapshot but no prior deltas — see Backend step 5.)

2) Extend caught-up event to carry replay info + new cursor

We need the server to tell the client whether it honored since or fell back to full replay.

// src/common/orpc/schemas/stream.ts
export const CaughtUpMessageSchema = z.object({
  type: z.literal("caught-up"),
  replay: z.enum(["full", "since", "live"]),
  cursor: OnChatCursorSchema.optional(),
});

Rules:

  • Server must always emit caught-up (existing invariant).
  • replay reflects the actual replay strategy used (even if since was requested).
  • cursor represents the server’s best “current” cursor at the end of replay.
    • cursor.stream should be present only when the server believes a stream is currently active.

Recommended approach (with phases)

Approach A (MVP): history cursor + live-only, keep stream replay full

  • Implement mode=live.
  • Implement mode=since for persisted messages only.
  • If an active stream exists, backend behaves like full (or frontend requests full).

Net LoC estimate (product code): ~250–400

Approach B (recommended): history cursor + stream timestamp cursor (full incremental reconnect)

Includes everything in Approach A, plus:

  • Client supplies cursor.stream.lastTimestamp.
  • Backend replays only stream parts with part.timestamp > lastTimestamp.
  • Frontend makes handleStreamStart(replay=true) idempotent (no wiping existing parts).

Net LoC estimate (product code): ~450–700

The rest of this plan details Approach B. (Approach A is the subset obtained by skipping steps marked "(B-only)".)


Implementation details

Backend

1) Schemas

  • Edit src/common/orpc/schemas/stream.ts
    • Define OnChatHistoryCursorSchema, OnChatStreamCursorSchema, OnChatCursorSchema.
    • Update CaughtUpMessageSchema to include { replay, cursor? }.
  • Edit src/common/orpc/schemas/api.ts
    • Update workspace.onChat.input to accept { mode?: OnChatMode }.
    • Export OnChatModeSchema (or keep private and infer types via schemas).
  • Touch src/common/orpc/types.ts
    • No manual type edits required (types are inferred), but update any code that assumes caught-up has only {type}.

2) ORPC router: thread mode through

  • Edit src/node/orpc/router.ts workspace.onChat handler
    • Pass input.mode through to session.replayHistory(...).
    • Keep createReplayBufferedStreamMessageRelay + heartbeat logic intact.

Pseudo-shape:

await session.replayHistory(
  ({ message }) => push(message),
  input.mode
);

3) AgentSession: implement replay modes + cursor validation

  • Edit src/node/services/agentSession.ts
    • Change replayHistory(listener) → replayHistory(listener, mode?: OnChatMode).
    • Change emitHistoricalEvents(listener) → emitHistoricalEvents(listener, mode?: OnChatMode).

Core logic outline:

private async emitHistoricalEvents(listener, mode) {
  let actualReplay: "full" | "since" | "live" = "full";
  let serverCursor: OnChatCursor | undefined;

  try {
    if (mode?.type === "live") {
      actualReplay = "live";
      // Useful live-only: if a stream is active, emit stream-start but no prior parts via
      // replayStream({ afterTimestamp: lastPartTimestamp(streamInfo.parts) }).
      return;
    }

    const streamInfo = this.aiService.getStreamInfo(this.workspaceId);
    const partial = await this.historyService.readPartial(this.workspaceId);

    const historyResult = await this.historyService.getHistoryFromLatestBoundary(this.workspaceId, 1);
    if (historyResult.success) {
      const history = historyResult.data;
      const { shouldFilter, startSeq } = validateSinceCursor(mode, history);
      actualReplay = shouldFilter ? "since" : "full";

      for (const msg of history) {
        if (partialSeqMatchesPlaceholder(partial, msg)) continue;
        if (actualReplay === "since" && msg.metadata?.historySequence !== undefined) {
          if (msg.metadata.historySequence <= startSeq) continue;
        }
        listener({ workspaceId: this.workspaceId, message: { ...msg, type: "message" } });
      }

      // cursor.history = max(historySequence) messageId from the full history list (not filtered)
      serverCursor = buildHistoryCursor(history, partial, streamInfo);
    }

    // Stream replay:
    if (streamInfo) {
      const afterTimestamp =
        mode?.type === "since" && mode.cursor.stream?.messageId === streamInfo.messageId
          ? mode.cursor.stream.lastTimestamp
          : undefined;

      await this.aiService.replayStream(this.workspaceId, { afterTimestamp });

      serverCursor = {
        ...serverCursor,
        stream: {
          messageId: streamInfo.messageId,
          lastTimestamp: lastPartTimestamp(streamInfo.parts) ?? 0,
        },
      };
    } else if (partial) {
      listener({ workspaceId: this.workspaceId, message: { ...partial, type: "message" } });
    }

    // Init replay:
    // ✅ Only do this when actualReplay === "full" (avoid duplicate init output in incremental reconnect)
    if (actualReplay === "full") {
      await this.initStateManager.replayInit(this.workspaceId);
    }
  } catch (e) {
    log.error(...);
  } finally {
    listener({ workspaceId: this.workspaceId, message: { type: "caught-up", replay: actualReplay, cursor: serverCursor } });
  }
}

Validation rules (defensive):

  • If mode.type !== "since" → full.
  • If since.cursor.history missing → fallback to full (we can’t safely filter).
  • Validate history cursor by finding messageId and confirming historySequence matches.
    • If mismatch / message missing → fallback to full.
  • (Optional) if cursor seq is greater than max seq in history → treat as full (cursor is from different universe).

This keeps failure handling localized to the replay path: an invalid or stale cursor degrades to a full replay and should never crash the app.
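For concreteness, a minimal sketch of the validateSinceCursor helper referenced in the outline above (the name and { shouldFilter, startSeq } return shape come from the outline; message field access such as msg.id and msg.metadata?.historySequence is assumed):

function validateSinceCursor(
  mode: OnChatMode | undefined,
  history: MuxMessage[]
): { shouldFilter: boolean; startSeq: number } {
  const fullReplay = { shouldFilter: false, startSeq: 0 };

  // Rule 1: anything other than an explicit "since" request means full replay.
  if (mode?.type !== "since") return fullReplay;

  // Rule 2: without a history anchor we cannot safely filter.
  const anchor = mode.cursor.history;
  if (!anchor) return fullReplay;

  // Rule 3: the anchor must exist in persisted history with a matching
  // historySequence; a mismatch implies history was truncated or edited
  // while the client was disconnected.
  const match = history.find((msg) => msg.id === anchor.messageId);
  if (!match || match.metadata?.historySequence !== anchor.historySequence) {
    return fullReplay;
  }

  return { shouldFilter: true, startSeq: anchor.historySequence };
}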

4) (B-only) Stream replay filtering by timestamp

  • Edit src/node/services/aiService.ts + src/node/services/streamManager.ts
    • Add optional parameter to replayStream(workspaceId, opts?: { afterTimestamp?: number }).
    • Implement filtering in StreamManager.replayStream():
const after = opts?.afterTimestamp;
const replayParts = streamInfo.parts.slice();
const filtered = after != null ? replayParts.filter(p => p.timestamp > after) : replayParts;

this.emitStreamStart(..., { replay: true });
for (const part of filtered) await this.emitPartAsEvent(..., { replay: true });

Also:

  • If a since cursor includes cursor.stream.messageId and it doesn’t match streamInfo.messageId, ignore afterTimestamp and replay full stream (client cursor is for a different stream).

5) Live-only mode behavior

Decide one of:

  • Strict live-only: no history, no init, no stream snapshot; just caught-up(replay:"live").
  • Useful live-only (recommended): if a stream is currently active, emit stream-start(replay:true) but no prior deltas so the subscriber can receive subsequent live deltas without dropping them.

(Plan assumes “useful live-only”.)


Frontend

6) Add a reconnect cursor and pass it on resubscribe

  • Edit src/browser/stores/WorkspaceStore.ts runOnChatSubscription()
    • Replace the current “always reset on retry” behavior.
    • Before subscribing, compute a cursor from the aggregator:
      • history: latest { messageId, historySequence } from aggregator.getAllMessages().
      • stream: if aggregator has an active stream, use its lastServerTimestamp and messageId.

Implementation shape:

const cursor = aggregator.getOnChatCursor();
const mode = cursor ? { type: "since", cursor } : undefined; // undefined => full
const iterator = await client.workspace.onChat({ workspaceId, mode }, { signal });

Notes:

  • Only send since when we already have local state (e.g., transient.caughtUp === true and aggregator.hasMessages()), otherwise request full.

7) Stop pre-emptively clearing state on reconnect; clear only when server says “full”

  • Edit WorkspaceStore:
    • Remove or significantly narrow pendingReplayReset usage.
    • Keep resetChatStateForReplay() as a utility, but trigger it based on caught-up.replay === "full".

Concrete change:

  • In runOnChatSubscription remove:
    • if (this.pendingReplayReset.delete(workspaceId)) this.resetChatStateForReplay(workspaceId);
    • this.pendingReplayReset.add(workspaceId); in the retry path

Instead, in handleChatMessage when receiving caught-up, decide whether to reset/replace or append.

8) Extend caught-up handling to support append + stream reconciliation

  • Edit WorkspaceStore.handleChatMessage() isCaughtUpMessage branch

New logic:

  1. Read data.replay (default to "full" if missing).
  2. If data.cursor?.stream is absent, call aggregator.clearActiveStreams() to recover from “stream ended while disconnected”.
  3. Load historical messages:
    • replay === "full" → clear + replace (current behavior)
    • replay === "since" → append-only (new)
    • replay === "live" → should be no historical messages; just mark caught up

Pseudo:

const replay = data.replay ?? "full";
if (!data.cursor?.stream) aggregator.clearActiveStreams();

if (transient.historicalMessages.length > 0) {
  aggregator.loadHistoricalMessages(transient.historicalMessages, hasActiveStream, {
    mode: replay === "since" ? "append" : "replace",
  });
}

(See next step for loadHistoricalMessages changes.)

9) Update StreamingMessageAggregator.loadHistoricalMessages to support append mode

  • Edit src/browser/utils/messages/StreamingMessageAggregator.ts

Change signature:

loadHistoricalMessages(
  messages: MuxMessage[],
  hasActiveStream = false,
  opts?: { mode: "replace" | "append" }
): void

Behavior:

  • replace = current behavior.
  • append:
    • Do not clear internal maps/caches.
    • Insert/overwrite incoming messages into this.messages (use this.messages.set(id, msg) to avoid the addMessage “parts length” heuristic).
    • Only rebuild derived state for messages whose historySequence is greater than the prior max sequence.
    • Finally invalidateCache() once.

This keeps the performance win of batch loading without corrupting derived state.
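A sketch of the new shape; replaceAll(), getMaxHistorySequence(), and rebuildDerivedStateFor() are hypothetical helper names standing in for existing or extracted logic:

loadHistoricalMessages(
  messages: MuxMessage[],
  hasActiveStream = false,
  opts?: { mode: "replace" | "append" }
): void {
  if (opts?.mode !== "append") {
    // Existing behavior: clear all state, then batch-load.
    this.replaceAll(messages, hasActiveStream);
    return;
  }

  // Append: only messages beyond the prior max sequence need derived-state rebuilds.
  const priorMaxSeq = this.getMaxHistorySequence();
  for (const msg of messages) {
    // Direct set() bypasses addMessage's "parts length" heuristic.
    this.messages.set(msg.id, msg);
    if ((msg.metadata?.historySequence ?? 0) > priorMaxSeq) {
      this.rebuildDerivedStateFor(msg);
    }
  }
  this.invalidateCache(); // one invalidation for the whole batch
}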

10) (B-only) Make handleStreamStart(replay=true) idempotent

  • Edit StreamingMessageAggregator.handleStreamStart()

Current behavior always overwrites the message with empty parts. For incremental replay we must not wipe already-received parts.

Change:

  • If data.replay === true and we already have this.messages.get(data.messageId):
    • Keep the existing message and only update metadata fields that are missing.
    • Still (re-)create the activeStreams context so the UI knows it’s streaming.
  • Otherwise keep current behavior.

Pseudo:

const existing = this.messages.get(data.messageId);
if (data.replay && existing) {
  // Don’t reset parts.
  existing.metadata = { ...existing.metadata, historySequence: data.historySequence, model: data.model, ... };
  this.markMessageDirty(data.messageId);
  return;
}

11) Cursor helper on aggregator

  • Edit StreamingMessageAggregator add:
    • getOnChatCursor(): { history?: {messageId; historySequence}; stream?: {messageId; lastTimestamp} } | undefined

Implementation details:

  • history: pick message with max metadata.historySequence.
  • stream: if activeStreams.size > 0, pick the (only) stream messageId and use its lastServerTimestamp.
    • Assert invariants: only one active stream should exist (per backend). If multiple, return undefined to force full replay.
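A sketch under those rules (assumes activeStreams values expose lastServerTimestamp, per the note above):

getOnChatCursor(): OnChatCursor | undefined {
  // history: the message carrying the max historySequence
  let history: { messageId: string; historySequence: number } | undefined;
  for (const msg of this.messages.values()) {
    const seq = msg.metadata?.historySequence;
    if (seq !== undefined && (!history || seq > history.historySequence)) {
      history = { messageId: msg.id, historySequence: seq };
    }
  }
  if (!history) return undefined; // no local state yet: request full replay

  // stream: at most one active stream is expected; more forces a full replay
  if (this.activeStreams.size > 1) return undefined;
  let stream: { messageId: string; lastTimestamp: number } | undefined;
  for (const [messageId, ctx] of this.activeStreams) {
    stream = { messageId, lastTimestamp: ctx.lastServerTimestamp };
  }
  return { history, stream };
}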

Tests / validation

Backend tests

Add coverage for:

  1. mode=since filters persisted messages correctly.
  2. Invalid cursor falls back to replay:"full".
  3. (B-only) replayStream({afterTimestamp}) only emits parts after timestamp.
  4. caught-up always emitted with { replay, cursor }.

Good starting points:

  • src/node/services/historyService.test.ts (real file I/O patterns)
  • src/cli/server.test.ts (spins up ORPC server/client; can test subscription end-to-end)
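For instance, the invalid-cursor case might be covered like this (session construction and history seeding elided; the listener shape follows replayHistory as called from the router):

it("falls back to full replay on a mismatched history cursor", async () => {
  const events: Array<{ type: string; replay?: string }> = [];
  await session.replayHistory(
    ({ message }) => events.push(message),
    { type: "since", cursor: { history: { messageId: "msg-1", historySequence: 999 } } }
  );
  const caughtUp = events.at(-1)!;
  expect(caughtUp.type).toBe("caught-up");
  expect(caughtUp.replay).toBe("full"); // server declares the strategy it actually used
});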

Frontend tests

  • Update/add tests in:
    • src/browser/stores/WorkspaceStore.test.ts for caught-up behavior (replace vs append)
    • src/browser/utils/messages/StreamingMessageAggregator.test.ts for:
      • append mode not clearing messages
      • handleStreamStart(replay=true) not wiping parts

Local validation checklist

  • make typecheck
  • make test
  • (if needed) make lint

Rollout / safety

  • Keep default behavior: when no cursor is supplied, backend stays “full replay”.
  • Frontend should only request since after it has successfully loaded once (caughtUp === true).
  • Backend explicitly signals fallback to full replay via caught-up.replay to keep clients correct.
Why we need server-indicated replay mode (vs. always append)

If the client's cursor is stale (history truncated/edited) and the server must send a full replay, append mode would create duplicates and/or keep stale messages that should have been deleted. Having the server declare replay: "full" lets the client reset deterministically.

Generated with mux • Model: anthropic:claude-opus-4-6 • Thinking: xhigh • Cost: $1.88

Move the full-replay reset to after the await for client.workspace.onChat()
to preserve the original timing behavior. The reset must happen after an
async boundary so the UI continues displaying previous state until replay
data starts arriving, and to avoid spurious store emissions during
synchronous workspace setup.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: dad31b6df3

@ThomasK33 (Member, Author)

@codex review

Addressed both review threads and pushed follow-up commit 1aebcf6.
Please re-review.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 1aebcf65b1

@ThomasK33 (Member, Author)

@codex review

Follow-up pushed in 8ac0779:

  • clear stale replay buffers before reconnect retries
  • track tool completion timestamps and replay only completions newer than cursor
  • add replayStream regression test for tool completion filtering

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 8ac07793db

@ThomasK33 (Member, Author)

@codex review

Applied follow-up in bbcb563 so caught-up.cursor.stream.lastTimestamp now advances using the max of:

  • replay part timestamps
  • tracked tool completion timestamps

This keeps reconnect cursors aligned with replayed tool completions.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: bbcb56364d

@ThomasK33 (Member, Author)

@codex review

Pushed ce72f97 to clear stale active stream contexts whenever caught-up reports replay: "full" (even if cursor.stream exists).

@chatgpt-codex-connector (bot)

Codex Review: Didn't find any major issues. 🚀

@ThomasK33 (Member, Author)

@codex review

Follow-up fix in 02d0dde to keep model-only stream test paths compatible:
toolCompletionTimestamps now initializes defensively before set/read.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 02d0dde54f

@ThomasK33 (Member, Author)

@codex review

Pushed 1aaf913.
since mode now requires both:

  • cursor message match (id + historySequence)
  • oldestHistorySequence match against current server history window

If oldest history differs (possible truncation/compaction while disconnected), server falls back to full replay.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 1aaf91377d

@ThomasK33 (Member, Author)

@codex review

Pushed 2a148e1:

  • since-mode input now requires a history cursor anchor
  • backend since replay now requires valid history cursor (stream-only no longer eligible)

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 2a148e1448

@ThomasK33 (Member, Author)

@codex review

Pushed f0378f7.
emitHistoricalEvents() now re-reads stream state after replay and only emits caught-up.cursor.stream from the latest backend state.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: f0378f722a

@ThomasK33 (Member, Author)

@codex review

Pushed 615cfcb to avoid emitting stale pre-replay partials when a stream replay was attempted and ended during reconnect.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 615cfcb9db

@ThomasK33 (Member, Author)

@codex review

Pushed d3823c1.
On caught-up with replay: "full", WorkspaceStore now clears transient queued message + live bash/task maps before rebuilding replay state.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: d3823c194b

@ThomasK33 (Member, Author)

@codex review

Pushed 1e112f1.
Full-replay fallback now clears live bash/task transient state but preserves queuedMessage so queued prompts remain visible across reconnect.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 1e112f1a87

@ThomasK33 (Member, Author)

@codex review

Pushed 10f238a.
Backend now replays a queued-message snapshot during history replay, and frontend full-replay fallback clears queued/live transient state before hydration.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 10f238a5db

@ThomasK33 (Member, Author)

@codex review

Pushed 5abbeca.
Queued-message snapshot replay moved to finally so it is emitted before caught-up even when replay errors.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 5abbecae8b

@ThomasK33 (Member, Author)

@codex review

@ThomasK33 (Member, Author)

@codex review

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review: automated review suggestions. Reviewed commit: 414bd92d58

Comment on lines +576 to +578
// Replay init state only for full replay. Incremental reconnects already have init state.
if (replayMode === "full") {
await this.initStateManager.replayInit(this.workspaceId);


P2: Replay init events when reconnecting with since/live mode

Skipping initStateManager.replayInit() for non-full replays means reconnects can miss init state transitions that happened while the client was offline. In particular, if a client disconnects after init-start and the hook finishes before reconnect, StreamingMessageAggregator only clears the running init state when it receives init-end, so the UI can remain stuck showing a stale "running" workspace-init message until a later full replay. This regression is introduced by gating init replay on replayMode === "full".
