feat: tui start shouldn't wait for model to resolve
#8717
base: main
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a32c06fc58
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
@codex review this
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0f3c232541
@codex review this
Codex Review: Didn't find any major issues. Another round soon, please!
```rust
.await
.map_err(map_api_error)?;
let remote_models = tokio::time::timeout(
    Duration::from_secs(5),
```
nit: add a `const` for this timeout.
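The nit can be addressed by hoisting the magic number into a named constant; a minimal sketch (the const name and the `client.list_models()` call site are hypothetical):

```rust
use std::time::Duration;

// Hypothetical name: how long we wait for the remote model list before
// giving up and falling back to the cached copy.
const REMOTE_MODELS_TIMEOUT: Duration = Duration::from_secs(5);

fn main() {
    // The call site would then read as intent rather than a magic number:
    // tokio::time::timeout(REMOTE_MODELS_TIMEOUT, client.list_models())
    assert_eq!(REMOTE_MODELS_TIMEOUT.as_secs(), 5);
    println!("{:?}", REMOTE_MODELS_TIMEOUT);
}
```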
```rust
/// Fetch the latest remote models, using the on-disk cache when still fresh.
pub async fn refresh_available_models_with_cache(&self, config: &Config) -> CoreResult<()> {
    let _refresh_guard = self.refresh_lock.lock().await;
```
Why isn't this lock in the same place we do the network call?
Is its purpose to avoid multiple parallel network calls?
I also don't want to attempt a refresh once we already have the cache.
But isn't that already handled by `if self.try_load_cache().await {`?
If two callers hit `refresh_available_models_with_cache` at times 0 and 1:
- neither finds the cache,
- both make the network call,
- both update the cache.

What I want: if one caller is already refreshing, the others wait and then read the cache, because by then it will exist.
You can use double-checked locking (https://en.wikipedia.org/wiki/Double-checked_locking): check the cache, take the lock, then check again before fetching.
```rust
    .await?,
    StartupModelMigrationAction::Exit
) {
    return Ok(AppExitInfo {
```
When do we even exit in migration?
Ctrl+C.
Ack. Consider `AppExitInfo::default()`.
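The suggestion amounts to deriving `Default` so the Ctrl+C exit path doesn't have to spell out each field; a sketch with hypothetical field names (the real `AppExitInfo` fields are not shown in this diff):

```rust
// Hypothetical fields; deriving Default gives zero/None for each of them.
#[derive(Debug, Default, PartialEq)]
struct AppExitInfo {
    token_usage: u64,
    conversation_id: Option<String>,
}

fn main() {
    // The early-exit path can then be a one-liner.
    let exit = AppExitInfo::default();
    assert_eq!(
        exit,
        AppExitInfo {
            token_usage: 0,
            conversation_id: None,
        }
    );
    println!("{exit:?}");
}
```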
```rust
feedback: feedback.clone(),
is_first_run,
model: model.clone(),
// The only truthful model is the one we get back on SessionConfigured.
```
Should this field be an `Option` then? It feels a bit risky to just put in an empty string.
will do that
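The `Option`-based version discussed above could look like this sketch: `None` until SessionConfigured reports the real model, instead of a sentinel empty string that could leak into the UI (the struct and method names are illustrative, not the real widget):

```rust
// Hypothetical stand-in for the widget state holding the current model.
#[derive(Debug, Default)]
struct ModelState {
    current_model: Option<String>,
}

impl ModelState {
    // Render a placeholder while the model is still resolving.
    fn model_label(&self) -> &str {
        self.current_model.as_deref().unwrap_or("(resolving)")
    }

    // Called once SessionConfigured reports the truthful model.
    fn on_session_configured(&mut self, model: String) {
        self.current_model = Some(model);
    }
}

fn main() {
    let mut state = ModelState::default();
    assert_eq!(state.model_label(), "(resolving)");
    state.on_session_configured("some-model".to_string());
    assert_eq!(state.model_label(), "some-model");
    println!("{}", state.model_label());
}
```

The type system then forces every reader of the field to decide what "not yet known" means, rather than silently rendering an empty string.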
```diff
  auth_manager: auth_manager.clone(),
  config,
- current_model: model.clone(),
+ current_model: String::new(),
```
is this still used?
I think we can remove it
```rust
} else {
    // If the user queued messages while startup was still in progress, kick off the first
    // turn now that we know the session is configured.
    self.maybe_send_next_queued_input();
```
how does MCP loading work without this?
The previous behavior was to submit the user message here; I assumed it works fine with MCP.
MCP has a startup status indicator and also queues user messages.
So why were we sending messages on SessionConfigured here? @_@ Will investigate.
```rust
// model with an empty string during startup; that would propagate to core and render as a
// blank model in the session header (/model current label, etc).
if !model.is_empty() {
    config.model = Some(model.clone());
```
Why do we ever update this? Shouldn't the model already come from the config? Or is it for follow-up sessions?
We also pass the config around, and I think we mutate it when the model changes.
It's for new sessions. I would love to get rid of the config in the client.
Don't we update the config that we pass around?
```rust
});

let Some(current_model) = current_model else {
    clear_pending_model_migration_notice(config);
```
Can we simplify the 5 copies of `clear_pending_model_migration_notice` + `return`?
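One way to collapse the repeated clear-then-return pattern is a single helper that each early-return site delegates to; a sketch under the assumption that the early returns all produce the same "nothing to migrate" result (the `Config` fields and `skip_migration` name are hypothetical):

```rust
// Hypothetical config holding the pending migration notice.
struct Config {
    pending_model_migration_notice: Option<String>,
}

fn clear_pending_model_migration_notice(config: &mut Config) {
    config.pending_model_migration_notice = None;
}

// One place that clears the notice and produces the "nothing to migrate"
// result, so each of the 5 call sites becomes `return skip_migration(config);`.
fn skip_migration(config: &mut Config) -> Option<String> {
    clear_pending_model_migration_notice(config);
    None
}

fn main() {
    let mut config = Config {
        pending_model_migration_notice: Some("old-model".to_string()),
    };
    let outcome = skip_migration(&mut config);
    assert!(outcome.is_none());
    assert!(config.pending_model_migration_notice.is_none());
    println!("cleared: {}", config.pending_model_migration_notice.is_none());
}
```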