Conversation

@aibrahim-oai aibrahim-oai commented Jan 4, 2026

  • Gate certain operations (send message and /model) on the session being configured (see the sketch after this list)
  • Make the model migration schedule-based: we evaluate it after the session is configured and schedule the migration for the next TUI run
  • Apply the same changes in tui2
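
Roughly the gating shape this describes, as a sketch with invented names (StartupState, UserAction, and dispatch are illustrative, not the actual tui types):

enum StartupState {
    WaitingForSession { queued: Vec<UserAction> },
    Ready,
}

enum UserAction {
    SendMessage(String),
    OpenModelPicker, // backs the /model command
}

struct App {
    startup: StartupState,
}

impl App {
    fn dispatch(&mut self, action: UserAction) {
        if let StartupState::WaitingForSession { queued } = &mut self.startup {
            // Session not configured yet: queue the action instead of running it.
            queued.push(action);
        } else {
            self.run(action);
        }
    }

    fn on_session_configured(&mut self) {
        // Flip to Ready first, then replay whatever the user queued.
        let prev = std::mem::replace(&mut self.startup, StartupState::Ready);
        if let StartupState::WaitingForSession { queued } = prev {
            for action in queued {
                self.run(action);
            }
        }
    }

    fn run(&mut self, _action: UserAction) { /* execute for real */ }
}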

@aibrahim-oai aibrahim-oai changed the title from async to async tui start on Jan 5, 2026
@aibrahim-oai aibrahim-oai marked this pull request as ready for review January 5, 2026 20:21

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a32c06fc58

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@aibrahim-oai aibrahim-oai changed the title from async tui start to feat: tui start shouldn't wait for model to resolve on Jan 5, 2026
@aibrahim-oai
Collaborator Author

@codex review this


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0f3c232541

@aibrahim-oai
Collaborator Author

@codex review this

@chatgpt-codex-connector
Contributor

Codex Review: Didn't find any major issues. Another round soon, please!

    .await
    .map_err(map_api_error)?;
let remote_models = tokio::time::timeout(
    Duration::from_secs(5),
Collaborator

@pakrym-oai pakrym-oai Jan 6, 2026

nit: add a const
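
Something like this, as a sketch; REMOTE_MODELS_FETCH_TIMEOUT is just a suggested name and the fetch body is stubbed:

use std::time::Duration;

const REMOTE_MODELS_FETCH_TIMEOUT: Duration = Duration::from_secs(5);

async fn fetch_remote_models() -> Option<Vec<String>> {
    tokio::time::timeout(REMOTE_MODELS_FETCH_TIMEOUT, async {
        Vec::new() // the actual list-models request, elided
    })
    .await
    .ok()
}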


/// Fetch the latest remote models, using the on-disk cache when still fresh.
pub async fn refresh_available_models_with_cache(&self, config: &Config) -> CoreResult<()> {
    let _refresh_guard = self.refresh_lock.lock().await;
Collaborator

Why isn't this lock in the same place we do the network call?

Collaborator

If its purpose is to avoid multiple parallel network calls?

Collaborator Author

I also don't want to attempt a refresh once we already have the cache.

Collaborator

but that's already handled with if self.try_load_cache().await { no?

Collaborator Author

If two instances hit refresh_available_models_with_cache at times 0 and 1:

Then neither will find the cache,
both will make the network call,
and both will update the cache.

What I want is: if one is already refreshing, the other waits and then reads the cache, because by then it will exist.
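
Sketched out, that serialize-then-recheck pattern could look like this (ModelCatalog and the helper names are invented; only the lock-then-recheck shape matters):

use tokio::sync::Mutex;

struct ModelCatalog {
    refresh_lock: Mutex<()>,
}

impl ModelCatalog {
    async fn refresh_available_models_with_cache(&self) {
        // A second caller blocks here while the first one is fetching...
        let _refresh_guard = self.refresh_lock.lock().await;
        // ...and then finds the freshly written cache instead of fetching again.
        if self.try_load_cache().await {
            return;
        }
        self.fetch_and_store().await;
    }

    async fn try_load_cache(&self) -> bool {
        false // read the on-disk cache; stubbed for the sketch
    }

    async fn fetch_and_store(&self) {
        // network call + cache write, elided
    }
}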

.await?,
StartupModelMigrationAction::Exit
) {
return Ok(AppExitInfo {
Collaborator

When do we even exit in migration?

Collaborator Author

ctrl+c

Collaborator

Ack. Consider AppExitInfo::default()
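
A minimal sketch of that suggestion, with invented fields (the real AppExitInfo's fields may differ):

#[derive(Default)]
struct AppExitInfo {
    exit_message: Option<String>,
    conversation_id: Option<String>,
}

fn on_ctrl_c() -> std::io::Result<AppExitInfo> {
    // With Default derived, the ctrl+c early-return needs no field-by-field init.
    Ok(AppExitInfo::default())
}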

feedback: feedback.clone(),
is_first_run,
model: model.clone(),
// The only truthful model is the one we get back on SessionConfigured.
Collaborator

Should this field be an Option then? Feels a bit risky to just put in an empty string.

Collaborator Author

will do that
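
A sketch of the Option-based version, with invented names for the surrounding type:

struct SessionHeader {
    /// None until SessionConfigured reports the real model, instead of an
    /// empty-string placeholder that could leak into rendering.
    model: Option<String>,
}

impl SessionHeader {
    fn model_label(&self) -> &str {
        // Render an explicit placeholder rather than a blank.
        self.model.as_deref().unwrap_or("(resolving model…)")
    }
}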

auth_manager: auth_manager.clone(),
config,
- current_model: model.clone(),
+ current_model: String::new(),
Collaborator

is this still used?

Collaborator Author

I think we can remove it

} else {
    // If the user queued messages while startup was still in progress, kick off the first
    // turn now that we know the session is configured.
    self.maybe_send_next_queued_input();
Collaborator

how does MCP loading work without this?

Collaborator Author

The previous behavior was submitting the user message here. I assumed it works fine with MCP.

Collaborator

MCP has a startup status indicator and also queues user messages.

Collaborator Author

So why were we sending messages on SessionConfigured here? @_@ Will investigate that.
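
For context, the queued-input handoff under discussion might look roughly like this (a sketch with invented names, not the actual widget code):

use std::collections::VecDeque;

struct ChatWidget {
    session_configured: bool,
    turn_in_progress: bool,
    queued_inputs: VecDeque<String>,
}

impl ChatWidget {
    fn maybe_send_next_queued_input(&mut self) {
        // Only start a turn once the session exists and nothing is running.
        if !self.session_configured || self.turn_in_progress {
            return;
        }
        if let Some(input) = self.queued_inputs.pop_front() {
            self.turn_in_progress = true;
            self.submit(input);
        }
    }

    fn submit(&mut self, _input: String) { /* hand off to core */ }
}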

// model with an empty string during startup; that would propagate to core and render as a
// blank model in the session header (/model current label, etc).
if !model.is_empty() {
    config.model = Some(model.clone());
Collaborator

Why do we ever update this? Shouldn't the model come from the config already? Or is it for follow-up sessions?

Collaborator

We also pass the config around, and I think we mutate it when the model changes.

Collaborator Author

It's for new sessions. I would love to get rid of the config in the client.

Collaborator

Don't we update the config that we pass around?

});

let Some(current_model) = current_model else {
    clear_pending_model_migration_notice(config);
Collaborator

Can we simplify the 5 copies of

clear_pending_model_migration_notice
return

?
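
One possible consolidation, sketched with invented types (the real function's return type may differ):

struct Config { model: Option<String>, pending_notice: Option<String> }
struct MigrationPlan { from: String }

fn clear_pending_model_migration_notice(config: &mut Config) {
    config.pending_notice = None;
}

fn evaluate_model_migration(config: &mut Config) -> Option<MigrationPlan> {
    // Centralize "clear the pending notice, then bail" behind one helper.
    fn bail(config: &mut Config) -> Option<MigrationPlan> {
        clear_pending_model_migration_notice(config);
        None
    }

    let Some(current_model) = config.model.clone() else {
        return bail(config);
    };
    // ...each of the other four guard clauses becomes `return bail(config);`...
    Some(MigrationPlan { from: current_model })
}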
