Autogen with Huggingface Llama 3.1 70B does not work with tool calling. Autogen Studio works however. #3503
Unanswered · redpanda77 asked this question in Q&A · 1 comment, 1 reply
-
@redpanda77 did you find a workaround to use Hugging Face with AutoGen?
-
I'm running code with two agents: a user proxy agent and an agent with a tool that writes text files. When using Llama 3.1 70B from Python code, I get an error related to the tool call. When I used exactly the same code for the tool/skill in AutoGen Studio, it worked perfectly.
In Jupyter I get this error:
***** Response from calling tool (0) *****
Error: Expecting value: line 1 column 1 (char 0)
The argument must be in JSON format.
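That message is what Python's `json.loads` raises when given an empty or non-JSON string, which suggests the Llama model is returning tool-call arguments that aren't strict JSON (GPT-4-class models are generally more reliable at emitting valid JSON arguments). As a rough illustration, not AutoGen's actual internals, a defensive parser like the hypothetical `parse_tool_args` below reproduces the error and sketches one possible workaround:

```python
import json

def parse_tool_args(raw: str) -> dict:
    """Best-effort parse of a model-generated tool-call argument string.

    Open-weight models sometimes emit an empty string, or wrap the JSON
    in extra text, which makes json.loads fail with
    "Expecting value: line 1 column 1 (char 0)".
    """
    if not raw or not raw.strip():
        return {}
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fall back to extracting the first {...} span, a common repair
        # when the model surrounds the JSON with prose.
        start, end = raw.find("{"), raw.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(raw[start:end + 1])
            except json.JSONDecodeError:
                pass
        return {}

# An empty argument string raises exactly the error from the traceback:
try:
    json.loads("")
except json.JSONDecodeError as e:
    print(e)  # Expecting value: line 1 column 1 (char 0)
```

If the model's raw output confirms this, constraining it (for example by instructing it in the system prompt to reply with JSON only) or pre-processing the arguments before execution may help.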
I tried the same Python code with the gpt-4-turbo model instead of Llama 3.1 70B and it runs perfectly.
Why does this happen? What's the difference between importing the AutoGen library in a Python file and using AutoGen Studio?