
feat: add chat timestamps and normalise LLM output whitespace#56

Open
juman909 wants to merge 1 commit into virtualcell:main from juman909:feat/ui-and-response-formatting

Conversation

@juman909

Summary

  • Frontend (ChatBox.tsx): Renders the timestamp field that was already stored in every Message object but never displayed. Each bubble now shows a locale-aware time string (e.g. 2:34 PM) beneath it, aligned to the bubble's side.
  • Backend (llms_service.py): After the final LLM response is returned in get_response_with_tools(), the service strips leading/trailing whitespace and collapses runs of three or more consecutive newlines into a single blank line. This prevents the extra spacing that appeared when tool results containing excessive blank lines were mirrored into the model output.

Both changes include thorough inline comments explaining the motivation and implementation decisions.
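The backend rule can be sketched as follows. This is an illustrative TypeScript rendering of the behaviour described above, not the PR's actual code (which is Python, in llms_service.py); the helper name is an assumption.

```typescript
// Sketch of the whitespace normalisation described in the summary:
// trim the edges, then collapse runs of 3+ newlines to one blank line.
// Name and placement are illustrative, not the PR's actual implementation.
function normaliseLlmOutput(text: string): string {
  return text.trim().replace(/\n{3,}/g, "\n\n");
}
```

Note that exactly two consecutive newlines (an intentional paragraph break) fall below the 3+ threshold and pass through unchanged.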

Test plan

  • Start a chat session and verify timestamps appear beneath each bubble
  • Confirm timestamp is right-aligned for user messages, left-aligned for assistant messages
  • Send a query that triggers a tool call and confirm the response has no leading/trailing blank lines
  • Confirm intentional paragraph breaks (2 newlines) are preserved

…ace (backend)

Frontend – ChatBox.tsx:
- Render the already-stored `timestamp` field below each message bubble.
  Previously the field was populated but never displayed, so users had no
  indication of when messages were sent.
- Timestamp is locale-aware (`toLocaleTimeString` with hour/minute options)
  and aligned to match the bubble side (right for user, left for assistant).
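
The rendering logic above can be sketched roughly as follows; the `Message` shape and function name here are assumptions for illustration, not the PR's actual types.

```typescript
// Assumed message shape; the real interface in ChatBox.tsx may differ.
interface Message {
  role: "user" | "assistant";
  text: string;
  timestamp: string; // already stored per message, e.g. an ISO-8601 string
}

// Locale-aware hour/minute label, e.g. "2:34 PM" in en-US locales.
// Passing [] uses the runtime's default locale.
function formatTimestamp(ts: string): string {
  return new Date(ts).toLocaleTimeString([], {
    hour: "numeric",
    minute: "2-digit",
  });
}
```

Alignment would then follow the bubble side, e.g. `message.role === "user" ? "right" : "left"`.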

Backend – llms_service.py:
- After receiving the final LLM response in get_response_with_tools(),
  strip leading/trailing whitespace and collapse runs of 3+ consecutive
  newlines to a single blank line.
- Fixes inconsistent spacing that appeared when tool results with extra
  blank lines were mirrored into the model's output.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
