@aymeric-roucher Indeed, this happens when you use the ToolCallingAgent with a LiteLLM model. The problem is not limited to ollama; it also happens with Claude via LiteLLM. The CodeAgent works fine with the same code, the same tools, and the same LiteLLM models. Since the ToolCallingAgent works fine with the HfApi models and the tool-calling mechanism is the same in both model classes as far as I can see, maybe LiteLLM has changed something which needs to be adapted on the smolagents side, assuming it worked before. Tested with 1.4.1.
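For anyone trying to reproduce, this is roughly the setup I'm using (a minimal sketch: the model id, api_base, and example tool are placeholders I picked, not the exact code from my project):

```python
# Minimal reproduction sketch, assuming smolagents ~1.4 and a locally running
# ollama server; model id, api_base, and the example tool are placeholders.
from smolagents import CodeAgent, LiteLLMModel, ToolCallingAgent, tool

@tool
def get_weather(city: str) -> str:
    """Returns a dummy weather report for a city.

    Args:
        city: Name of the city to look up.
    """
    return f"The weather in {city} is sunny."

model = LiteLLMModel(
    model_id="ollama/llama3.2",         # same failure with e.g. Claude via LiteLLM
    api_base="http://localhost:11434",  # default local ollama endpoint
)

# Fails:
agent = ToolCallingAgent(tools=[get_weather], model=model)
agent.run("What is the weather in Paris?")

# Works with the same model and tool:
# agent = CodeAgent(tools=[get_weather], model=model)
# agent.run("What is the weather in Paris?")
```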
Is anyone already working on this or should I investigate?
A week ago, ToolCallingAgent worked for me via LiteLLM with Claude 3.5 Sonnet and GPT-4o, and two weeks ago with ollama running Llama 3.2.
Indeed, maybe fixing the prompts could help, @RolandJAAI!
I tried to follow this tutorial with ollama models:
https://huggingface.co/docs/smolagents/tutorials/inspect_runs
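For reference, this is roughly the tracing setup from that tutorial as far as I can tell (assuming a local Phoenix instance on the default port):

```python
# Tracing setup along the lines of the inspect_runs tutorial, assuming Phoenix
# is running locally (e.g. started with `python -m phoenix.server.main serve`).
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

endpoint = "http://0.0.0.0:6006/v1/traces"  # default local Phoenix collector
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

# After this, every smolagents agent run shows up as a trace in Phoenix.
SmolagentsInstrumentor().instrument(tracer_provider=trace_provider)
```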
This is the error tracked by Phoenix:
This is what is happening in transformation.py around line 264:
LiteLLM tries to use the keys 'name' and 'arguments'; unfortunately, 'function_call' contains this response:
which does not match those keys. I had a look at the smolagents prompts.py, which has examples like:
So please fix the example tools to match LiteLLM's 'litellm/llms/ollama/completion/transformation.py' implementation.
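To make the mismatch concrete, here is a minimal sketch (not LiteLLM's actual code) of the kind of lookup I understand the ollama transformation to be doing; the tool name and the "wrong" key names below are hypothetical:

```python
# Illustrative only: mimics the reported expectation that the parsed
# 'function_call' JSON carries 'name' and 'arguments' keys.
import json

def parse_function_call(raw: str) -> dict:
    call = json.loads(raw)
    return {
        "name": call["name"],            # KeyError if the prompt examples teach
        "arguments": call["arguments"],  # the model a different schema
    }

# Shape that matches the expected keys:
parse_function_call('{"name": "get_weather", "arguments": {"city": "Paris"}}')

# Hypothetical shape a model might emit if the prompt examples use other keys;
# this is the kind of output that raises the error:
# parse_function_call('{"tool_name": "get_weather", "tool_arguments": {"city": "Paris"}}')
```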