
[BUG] APIConnectionError: "Model is None" Despite Model Being Present in Completion Response for Gemini-1.5-Flash-002 #1735

Closed
HannaHUp opened this issue Jan 17, 2025 · 1 comment
Labels
bug Something isn't working
Bug Description
Follow-up to #1716.
I updated the packages because there was a new release, and with the exact same code I am now encountering the following issue:

When attempting to generate content using the litellm library with the gemini-1.5-flash-002 model, an APIConnectionError is raised. The error indicates that the model parameter is None, despite the API response containing the correct model information. This prevents the successful processing of the completion response.

To Reproduce

Expected behavior

Relevant Logs/Tracebacks
INFO:httpx:HTTP Request: POST https://us-central1-aiplatform.googleapis.com/v1/projects/dj-opis-nonprod-ds/locations/us-central1/publishers/google/models/gemini-1.5-flash-002:generateContent "HTTP/1.1 200 OK"
RAW RESPONSE:
{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {
            "text": "3\n"
          }
        ]
      },
      "finishReason": "STOP",
      "avgLogprobs": -0.00026624122983776033
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 746,
    "candidatesTokenCount": 2,
    "totalTokenCount": 748
  },
  "modelVersion": "gemini-1.5-flash-002"
}
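A quick standard-library check of the payload above shows where the model name actually lives: Vertex AI reports it under `modelVersion`, and there is no top-level `model` key. Whether litellm's mapping of that field regressed is an assumption on my part, but it is consistent with the error below:

```python
import json

# Minimal reconstruction of the raw Vertex AI response quoted above.
raw = '''
{
  "candidates": [
    {"content": {"role": "model", "parts": [{"text": "3\\n"}]},
     "finishReason": "STOP"}
  ],
  "usageMetadata": {"promptTokenCount": 746, "candidatesTokenCount": 2,
                    "totalTokenCount": 748},
  "modelVersion": "gemini-1.5-flash-002"
}
'''

data = json.loads(raw)
print(data["modelVersion"])   # gemini-1.5-flash-002
print("model" in data)        # False: no top-level "model" key
```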

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

ERROR:trulens.core.feedback.endpoint:LiteLLMEndpoint request failed <class 'litellm.exceptions.APIConnectionError'>=litellm.APIConnectionError: Model is None and does not exist in passed completion_response. Passed completion_response={'id': 'chatcmpl-cb6b82e2-5e3f-40cc-9eeb-36be361cc351', 'created': 1737150730, 'model': 'gemini-1.5-flash-002', 'object': 'chat.completion', 'system_fingerprint': None, 'choices': [{'finish_reason': 'stop', 'index': 0, 'message': {'content': '3\n', 'role': 'assistant', 'tool_calls': None, 'function_call': None}}], 'usage': {'completion_tokens': 2, 'prompt_tokens': 746, 'total_tokens': 748, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'vertex_ai_grounding_metadata': [], 'vertex_ai_safety_results': [], 'vertex_ai_citation_metadata': []}, model=None
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/litellm/main.py", line 2290, in completion
model_response = vertex_chat_completion.completion( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/trulens/core/feedback/endpoint.py", line 876, in tru_wrapper
return update_response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/trulens/core/feedback/endpoint.py", line 858, in update_response
response_ = endpoint.handle_wrapped_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/trulens/providers/litellm/endpoint.py", line 170, in handle_wrapped_call
self.global_callback.handle_generation(response=response)
File "/opt/conda/lib/python3.11/site-packages/trulens/providers/litellm/endpoint.py", line 60, in handle_generation
setattr(self.cost, "cost", completion_cost(response))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/litellm/cost_calculator.py", line 768, in completion_cost
raise e
File "/opt/conda/lib/python3.11/site-packages/litellm/cost_calculator.py", line 629, in completion_cost
raise ValueError(
ValueError: Model is None and does not exist in passed completion_response. Passed completion_response={'id': 'chatcmpl-cb6b82e2-5e3f-40cc-9eeb-36be361cc351', 'created': 1737150730, 'model': 'gemini-1.5-flash-002', 'object': 'chat.completion', 'system_fingerprint': None, 'choices': [{'finish_reason': 'stop', 'index': 0, 'message': {'content': '3\n', 'role': 'assistant', 'tool_calls': None, 'function_call': None}}], 'usage': {'completion_tokens': 2, 'prompt_tokens': 746, 'total_tokens': 748, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'vertex_ai_grounding_metadata': [], 'vertex_ai_safety_results': [], 'vertex_ai_citation_metadata': []}, model=None
. Retries remaining=3.
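The ValueError is puzzling because the passed completion_response clearly carries model='gemini-1.5-flash-002'. The fallback that the error message implies (use the explicit model argument if given, otherwise recover it from the response) can be sketched in plain Python. This is an illustrative reconstruction, not litellm's actual code, and `resolve_model` is a hypothetical helper:

```python
def resolve_model(model, completion_response):
    """Hypothetical sketch of the fallback implied by the error message:
    prefer the explicit `model`, else read it from the response dict."""
    if model is not None:
        return model
    resolved = completion_response.get("model")
    if resolved is None:
        raise ValueError(
            "Model is None and does not exist in passed completion_response"
        )
    return resolved

# The response from the traceback above does contain the model, so this
# fallback would succeed -- suggesting the newer litellm skips or breaks it.
response = {
    "id": "chatcmpl-cb6b82e2-5e3f-40cc-9eeb-36be361cc351",
    "model": "gemini-1.5-flash-002",
}
print(resolve_model(None, response))   # gemini-1.5-flash-002
```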

Environment:

version: 1.26.0
python.version: 3.11.11
python.connector.version: 3.12.1
os.name: Linux
snowflake-sqlalchemy: 1.7.3
trulens-eval: 1.3.2

Additional context
I didn't have this issue before the new release, though.

@HannaHUp HannaHUp added the bug Something isn't working label Jan 17, 2025
HannaHUp (Author) commented:

I downgraded LiteLLM to version 1.57.8, and the error no longer occurs. LiteLLM has been releasing frequently, and one of the newer versions likely introduced this regression.
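For anyone else blocked by this, the workaround above amounts to pinning the last known-good release until the regression is fixed upstream. A sketch of the pin (version numbers taken from this thread):

```
# requirements.txt -- pin litellm to the last version reported working here
litellm==1.57.8
trulens-eval==1.3.2
```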
