🤖 perf: speed up workspace-open transcript paint #2426
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 49c9a72a39
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
Tighten the workspace-open critical path by reducing message-row recomputation, cutting unnecessary live subscriptions, and deferring non-critical usage updates. Also fix dist/e2e startup by replacing desktop alias imports that were emitted as unresolved runtime requires. --- _Generated with `mux` • Model: `openai:gpt-5.3-codex` • Thinking: `xhigh` • Cost: `$4.95`_ <!-- mux-attribution: model=openai:gpt-5.3-codex thinking=xhigh costs=4.95 -->
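The "compute grouping once per snapshot" idea behind the message-row work can be sketched roughly as follows. The types, grouping rule, and helper shape here are illustrative assumptions; the PR names a `computeBashOutputGroupInfos` helper but its real signature is not shown in this description.

```typescript
// Illustrative sketch: compute bash-output group info once per message
// snapshot and look it up per row, instead of having every row scan its
// neighbors on each render. Types are hypothetical, not the mux ones.
interface Msg {
  id: string;
  kind: string; // e.g. "text" | "bash_output" | ...
}

interface GroupInfo {
  start: number; // index of first row in the contiguous bash_output run
  end: number;   // index of last row in the run (inclusive)
}

function computeBashOutputGroupInfos(messages: Msg[]): Map<string, GroupInfo> {
  const groups = new Map<string, GroupInfo>();
  let runStart = -1;
  // One linear pass over the snapshot; rows then do O(1) Map lookups.
  for (let i = 0; i <= messages.length; i++) {
    const isBash = i < messages.length && messages[i].kind === "bash_output";
    if (isBash && runStart === -1) runStart = i;
    if (!isBash && runStart !== -1) {
      for (let j = runStart; j < i; j++) {
        groups.set(messages[j].id, { start: runStart, end: i - 1 });
      }
      runStart = -1;
    }
  }
  return groups;
}
```

Each row component can then read its group info from the memoized map, so adding or streaming one message no longer triggers a per-row rescan of the whole transcript.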
Keep latest-streaming bash subscription active for expanded rows so auto-expanded completed rows still collapse when a newer bash starts. This avoids a regression introduced by over-scoping live subscription guards. --- _Generated with `mux` • Model: `openai:gpt-5.3-codex` • Thinking: `xhigh` • Cost: `$4.95`_ <!-- mux-attribution: model=openai:gpt-5.3-codex thinking=xhigh costs=4.95 -->
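The subscription guard described above can be sketched as a small predicate. The names and row shape are hypothetical; the point is only that "completed" alone is not sufficient to drop the live subscription.

```typescript
// Illustrative guard: a row needs the "latest streaming bash" subscription
// while it is streaming, and also while it stays expanded after completing,
// so an auto-expanded row can still collapse when a newer bash starts.
// Guarding on streaming-only was the over-scoping regression described above.
interface RowState {
  status: "streaming" | "completed";
  expanded: boolean;
}

function needsLatestBashSubscription(row: RowState): boolean {
  return row.status === "streaming" || row.expanded;
}
```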
Fix a regression where completed reasoning blocks immediately re-collapsed after clicking expand. Auto-collapse now only triggers on streaming->completed transitions. Also add a lightweight summary parser for leading markdown bold segments (`**...**`) so common OpenAI reasoning headers keep their emphasis without bringing back full markdown rendering on collapsed rows. --- _Generated with `mux` • Model: `openai:gpt-5.3-codex` • Thinking: `xhigh` • Cost: `$4.95`_ <!-- mux-attribution: model=openai:gpt-5.3-codex thinking=xhigh costs=4.95 -->
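A lightweight parser for a leading `**...**` segment might look like the sketch below. The helper name and return shape are assumptions for illustration; the actual mux parser is not shown in this PR description.

```typescript
// Hypothetical helper: split a collapsed reasoning summary into a leading
// bold segment (if the line starts with **...**) and the remaining plain
// text, without invoking a full markdown renderer on collapsed rows.
interface SummaryParts {
  bold: string | null; // text inside a leading **...** segment, if any
  rest: string;        // everything after that segment (or the whole line)
}

function parseLeadingBold(summary: string): SummaryParts {
  // Non-greedy match so "**a** and **b**" only captures the first segment.
  const match = /^\*\*(.+?)\*\*/.exec(summary);
  if (!match) return { bold: null, rest: summary };
  return { bold: match[1], rest: summary.slice(match[0].length) };
}
```

The renderer can then emphasize `bold` with a plain styled span, keeping common OpenAI-style reasoning headers bold without paying for markdown parsing on every collapsed row.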
d4e6120 to 4d2c6ad
@codex review
Codex Review: Didn't find any major issues. 🚀
c12821e to 7634bbc
Summary
Follow-up perf pass for workspace-open rendering that reduces critical-path work in the transcript UI and message utilities. This improves startup paint times for loaded workspaces, especially on long tool-heavy histories.
Background
PR #2397 added automated perf profiling for workspace-open flows. Those artifacts showed recurring renderer-side hot spots during initial transcript paint (message-row prop churn, per-row grouping scans, and unnecessary live subscriptions after completion).
Implementation
- Compute `bash_output` grouping once per message snapshot (`computeBashOutputGroupInfos`) instead of per-row scans.
- Restrict `taskReportLinking` to `task`/`task_await` rows only.
- Memoize `userMessageNavigation` objects by `historyId` so non-message state bumps stop invalidating row props.
- Defer `usageStore.bump()` to idle (`requestIdleCallback` with a timeout fallback) so the initial transcript paint is prioritized.
- Reduce `MAX_DISPLAYED_MESSAGES` from 128 → 64; older messages stay `history-hidden` until "Load all".
- Replace `@/...` alias imports in `src/desktop/main.ts` with relative imports so dist runs do not emit unresolved alias `require()` calls.
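The idle-deferral step above can be sketched as a small helper. The helper name is hypothetical, and the real code defers `usageStore.bump()` specifically; this sketch only shows the `requestIdleCallback`-with-`setTimeout`-fallback pattern.

```typescript
// Sketch of deferring non-critical work (e.g. a usage-store bump) until the
// renderer is idle, falling back to setTimeout where requestIdleCallback is
// unavailable (e.g. in Node or older environments).
type IdleHandle = number | ReturnType<typeof setTimeout>;

function deferToIdle(work: () => void, timeoutMs = 500): IdleHandle {
  // Look up requestIdleCallback via globalThis so this also type-checks
  // outside the DOM lib; the feature-test keeps the fallback path safe.
  const ric = (globalThis as {
    requestIdleCallback?: (cb: () => void, opts?: { timeout: number }) => number;
  }).requestIdleCallback;
  if (typeof ric === "function") {
    // The timeout option guarantees the work still runs even if the
    // renderer never goes idle during a long streaming session.
    return ric(() => work(), { timeout: timeoutMs });
  }
  return setTimeout(work, 0);
}
```

Calling `deferToIdle(() => usageStore.bump())` at workspace open would keep the first transcript paint ahead of usage bookkeeping while still bounding how long the bump can be delayed.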
Validation

- `bun test src/browser/utils/messages/messageUtils.test.ts src/browser/utils/messages/StreamingMessageAggregator.test.ts src/browser/stores/WorkspaceStore.test.ts`
- `make static-check`
- `MUX_E2E_RUN_PERF=1 MUX_PROFILE_REACT=1 MUX_E2E_LOAD_DIST=1 xvfb-run -a bun x playwright test tests/e2e/scenarios/perf.workspaceOpen.spec.ts --project=electron --workers=1`

Perf wall-time (isolated profile runs)
Risks
- Older history is `history-hidden` by default; users can restore it with "Load all".

_Generated with `mux` • Model: `openai:gpt-5.3-codex` • Thinking: `xhigh` • Cost: `$4.95`_