Bug Description
Follow-up to #1716.
I updated the packages because there was a new release. With the exact same code, I now encounter the following issue:
When generating content through the litellm library with the gemini-1.5-flash-002 model, an APIConnectionError is raised. The error reports that the model parameter is None, even though the API response contains the correct model information, so the completion response cannot be processed.
To Reproduce
1. Upgrade LiteLLM to a release newer than 1.57.8.
2. Run a TruLens feedback function through the LiteLLM endpoint against gemini-1.5-flash-002 on Vertex AI.
3. The Vertex AI call returns HTTP 200, but the cost calculation fails with APIConnectionError: Model is None (see logs below).
Expected behavior
The completion response should be processed and its cost recorded, as it is on LiteLLM 1.57.8; the raw API response already contains the correct modelVersion.
Relevant Logs/Tracebacks
INFO:httpx:HTTP Request: POST https://us-central1-aiplatform.googleapis.com/v1/projects/dj-opis-nonprod-ds/locations/us-central1/publishers/google/models/gemini-1.5-flash-002:generateContent "HTTP/1.1 200 OK"
RAW RESPONSE:
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"text": "3\n"
}
]
},
"finishReason": "STOP",
"avgLogprobs": -0.00026624122983776033
}
],
"usageMetadata": {
"promptTokenCount": 746,
"candidatesTokenCount": 2,
"totalTokenCount": 748
},
"modelVersion": "gemini-1.5-flash-002"
}
I downgraded LiteLLM to 1.57.8 and the error no longer occurs, so the regression appears to have been introduced by one of the frequent recent releases.
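Until this is fixed upstream, pinning LiteLLM to the last known-good version from this report in a requirements file avoids the error (1.57.8 is the version verified above; the exact first broken release has not been bisected):

```
litellm==1.57.8
```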
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use litellm.set_verbose=True'.
ERROR:trulens.core.feedback.endpoint:LiteLLMEndpoint request failed <class 'litellm.exceptions.APIConnectionError'>=litellm.APIConnectionError: Model is None and does not exist in passed completion_response. Passed completion_response={'id': 'chatcmpl-cb6b82e2-5e3f-40cc-9eeb-36be361cc351', 'created': 1737150730, 'model': 'gemini-1.5-flash-002', 'object': 'chat.completion', 'system_fingerprint': None, 'choices': [{'finish_reason': 'stop', 'index': 0, 'message': {'content': '3\n', 'role': 'assistant', 'tool_calls': None, 'function_call': None}}], 'usage': {'completion_tokens': 2, 'prompt_tokens': 746, 'total_tokens': 748, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'vertex_ai_grounding_metadata': [], 'vertex_ai_safety_results': [], 'vertex_ai_citation_metadata': []}, model=None
Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/litellm/main.py", line 2290, in completion
model_response = vertex_chat_completion.completion( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/trulens/core/feedback/endpoint.py", line 876, in tru_wrapper
return update_response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/trulens/core/feedback/endpoint.py", line 858, in update_response
response_ = endpoint.handle_wrapped_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/trulens/providers/litellm/endpoint.py", line 170, in handle_wrapped_call
self.global_callback.handle_generation(response=response)
File "/opt/conda/lib/python3.11/site-packages/trulens/providers/litellm/endpoint.py", line 60, in handle_generation
setattr(self.cost, "cost", completion_cost(response))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/litellm/cost_calculator.py", line 768, in completion_cost
raise e
File "/opt/conda/lib/python3.11/site-packages/litellm/cost_calculator.py", line 629, in completion_cost
raise ValueError(
ValueError: Model is None and does not exist in passed completion_response. Passed completion_response={'id': 'chatcmpl-cb6b82e2-5e3f-40cc-9eeb-36be361cc351', 'created': 1737150730, 'model': 'gemini-1.5-flash-002', 'object': 'chat.completion', 'system_fingerprint': None, 'choices': [{'finish_reason': 'stop', 'index': 0, 'message': {'content': '3\n', 'role': 'assistant', 'tool_calls': None, 'function_call': None}}], 'usage': {'completion_tokens': 2, 'prompt_tokens': 746, 'total_tokens': 748, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'vertex_ai_grounding_metadata': [], 'vertex_ai_safety_results': [], 'vertex_ai_citation_metadata': []}, model=None
. Retries remaining=3.
Environment:
- version: 1.26.0
- python.version: 3.11.11
- python.connector.version: 3.12.1
- os.name: Linux
- snowflake-sqlalchemy version: 1.7.3
- trulens-eval version: 1.3.2
Additional context
I did not have this issue before the new release.