I was using OpenAI and then wanted to switch to Gemini. When prompted for a model from the new provider, `goose configure` offers the model from the current provider as the default. It seems to be pulling this value from `~/.config/goose/config.yaml` rather than suggesting models appropriate to the selected provider.
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Google Gemini
│
● GOOGLE_API_KEY is already configured
│
◇ Would you like to update this value?
│ No
│
◆ Enter a model from that provider:
│ gpt-4o (default)
└
```
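For reference, the stale default appears to come straight from the `GOOSE_MODEL` key in the config file. A minimal sketch of what `~/.config/goose/config.yaml` presumably contains at this point (key names per the goose docs; the values reflect my earlier OpenAI setup):

```yaml
# ~/.config/goose/config.yaml (sketch; assumes the standard goose layout)
GOOSE_PROVIDER: openai   # provider from the previous configuration
GOOSE_MODEL: gpt-4o      # this value is offered as the default for any provider
```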
Here's an example of going from Gemini to Ollama:
```
┌ goose-configure
│
◇ What would you like to configure?
│ Configure Providers
│
◇ Which model provider should we use?
│ Ollama
│
◇ Enter a model from that provider:
│ gemini-2.0-flash-exp
│
◐ Checking your configuration... ExecutionError("error sending request for url (http://localhost:11434/v1/chat/completions)")
◇ We could not connect!
│
└ The provider configuration was invalid
```
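Note that the failed check above ran with the carried-over `gemini-2.0-flash-exp` against Ollama's endpoint. As a workaround (an assumption on my part, not a confirmed fix), manually editing the two keys in `~/.config/goose/config.yaml` to a matching provider/model pair before re-running goose sidesteps the stale default:

```yaml
# ~/.config/goose/config.yaml (sketch; model name is a hypothetical example)
GOOSE_PROVIDER: ollama
GOOSE_MODEL: llama3.2   # replace with a model actually pulled in Ollama
```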