Bug Description
An unexpected error is raised when invoking an otherwise working LCEL chain inside a TruChain recorder.

To Reproduce
Prerequisites:
- A .env file following the template.
- A Qdrant instance up and running with a collection called test and at least one point within it, following for example this quickstart.
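A minimal sketch of that collection setup, assuming text-embedding-ada-002's 1536-dimensional vectors and LangChain's default page_content payload key:

# Sketch of the prerequisite setup, following the Qdrant quickstart.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient("http://localhost:6333")
client.create_collection(
    collection_name="test",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)
client.upsert(
    collection_name="test",
    points=[
        PointStruct(
            id=1,
            vector=[0.1] * 1536,  # placeholder embedding with the right dimensionality
            payload={"page_content": "Paris is the capital of France."},
        )
    ],
)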
Here is the code to reproduce the error:
from langchain_openai import AzureOpenAIEmbeddings, AzureChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from qdrant_client import QdrantClient
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.vectorstores import Qdrant
from trulens.apps.langchain import TruChain
from trulens.core import Feedback, Select, TruSession
from trulens_eval.feedback.provider import AzureOpenAI
from dotenv import load_dotenv

load_dotenv()
class Evaluator:
    """
    A class for evaluating a specific RAG (Retrieval-Augmented Generation) chain
    using the Trulens evaluation framework.

    Each Evaluator is linked to a specific chain, and evaluation feedback is
    tailored for that chain.

    Attributes:
        _tru (Tru): An instance of the Trulens evaluation engine.
        _recorder (TruChain): The recorder instance for the specific RAG chain
            being evaluated.
        metrics (dict): A dictionary that holds labels, their corresponding
            feedback metric functions, and the chain parts they apply to.
    """

    metrics = {
        "Groundedness": {
            "metric": "groundedness_measure_with_cot_reasons",
            "parts": ["context", "output"],
        },
        "Answer Relevance": {
            "metric": "relevance_with_cot_reasons",
            "parts": ["input", "output"],
        },
        "Context Relevance": {
            "metric": "context_relevance_with_cot_reasons",
            "parts": ["input", "context"],
        },
    }
    def __init__(self, chain, collection_name, llm):
        """
        Initializes the Evaluator class with a specific RAG chain.

        Args:
            chain (LangChain chain): The chain to be evaluated.
            collection_name (str): The collection name for identifying the app instance.
            llm (AzureChatOpenAI): The language model instance providing the deployment name.
        """
        self._tru = TruSession()
        self._recorder = self._create_chain_recorder(chain, collection_name, llm)

    def get_chain_recorder(self):
        """
        Retrieve the Trulens recorder instance for the current RAG chain.

        Returns:
            TruChain: The TruChain object, configured to record the evaluation
                using the defined feedback functions.
        """
        return self._recorder

    def _create_chain_recorder(self, chain, collection_name, llm):
        """
        Set up the recorder for the RAG chain, identified by the collection name.

        This method initializes feedback functions to measure groundedness,
        answer relevance, and context relevance within a retrieval-augmented
        generation (RAG) chain. The feedback functions are provided by the
        AzureOpenAI provider.

        Args:
            chain (LangChain chain): The chain to be evaluated.
            collection_name (str): The collection name for identifying the app instance.
            llm (AzureChatOpenAI): The language model instance providing the deployment name.

        Returns:
            TruChain: The TruChain object, configured to record the evaluation
                using the defined feedback functions.
        """
        provider = AzureOpenAI(deployment_name=llm.deployment_name)
        chain_parts = {
            "context": Select.RecordRets["context"].page_content,
            "output": Select.RecordRets["answer"],
            "input": Select.RecordRets["input"],
        }
        feedbacks = []
        # Loop over the metrics dictionary and apply feedback based on the defined parts
        for label, metric_info in self.metrics.items():
            feedback_fn = getattr(provider, metric_info["metric"])  # Get the feedback function
            # Dynamically apply the feedback function to the relevant parts
            feedback = Feedback(feedback_fn, name=label)
            for part in metric_info["parts"]:
                feedback = feedback.on(chain_parts[part])
            # Handle aggregation if specified
            if "aggregate" in metric_info:
                feedback = feedback.aggregate(metric_info["aggregate"])
            feedbacks.append(feedback)
        # Create and return the TruChain instance
        tru_chain = TruChain(chain, app_id=collection_name, feedbacks=feedbacks)
        return tru_chain


# START INSTANTIATION EXAMPLE OBJECTS
qdrant_client = QdrantClient("http://localhost:6333")
collection_name = "test"
embeddings = AzureOpenAIEmbeddings(deployment="text-embedding-ada-002", chunk_size=1)
vector_store = Qdrant(qdrant_client, collection_name, embeddings)
retriever = vector_store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={
        "k": 4,
        "score_threshold": 0,
    },
)
llm = AzureChatOpenAI(deployment_name="gpt-4o")
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Your task is to assist the user based on the given request and context: {context}"),
        MessagesPlaceholder("chat_history"),
        ("human", 'Task:"""{input}"""'),
    ]
)
user_input, chat_history = "What is the capital of France?", []
input_args = {
    "input": user_input,
    "chat_history": [
        (message.type, message.content)
        for interaction in chat_history
        for message in interaction
    ],
}
# END INSTANTIATION EXAMPLE OBJECTS


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


# This Runnable takes a dict with keys 'input' and 'context',
# formats them into a prompt, and generates a response.
rag_chain = (
    {
        "input": lambda x: x["input"],
        "context": lambda x: format_docs(x["context"]),  # context
        "chat_history": lambda x: x["chat_history"],  # chat history
    }
    | prompt | llm | StrOutputParser()
).with_config(run_name="rag_chain")

retrieve_docs_chain = (lambda _: input_args["input"]) | retriever
# Below, we chain `.assign` calls. This takes a dict and successively adds keys
# -- "context" and "answer" -- where the value for each key is determined by a
# Runnable. The Runnable operates on all keys in the dict.
docs_chain = RunnablePassthrough.assign(context=retrieve_docs_chain)
rag_chain = docs_chain | RunnablePassthrough.assign(answer=rag_chain)
# Evaluator
evaluator = Evaluator(rag_chain, collection_name, llm)
with evaluator.get_chain_recorder() as recorder:
    ai_answer = rag_chain.invoke(input=input_args)
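On a working run, the recorded feedback would then be inspected; a minimal sketch, assuming the trulens 1.x recording and leaderboard APIs:

# Sketch: inspect what the recorder captured (assumes trulens 1.x APIs).
record = recorder.get()  # the Record captured inside the context manager
print(evaluator._tru.get_leaderboard())  # aggregated feedback scores per app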
Expected behavior
The code should work the same way with or without TruLens because the LCEL chain itself works correctly.
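For reference, invoking the same chain outside the recorder context succeeds:

# The bare LCEL chain runs fine; the error only appears under the TruChain recorder.
ai_answer = rag_chain.invoke(input=input_args)
print(ai_answer["answer"])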
Relevant Logs/Tracebacks
You can take a look at the complete traceback at this GitHub Gist.
Environment:
Additional context
Looks like in trulens.providers.openai.endpoint.OpenAIEndpoint.handle_wrapped_call:490, model belongs to bindings.kwargs but is None.
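A minimal illustration of the suspected condition (the surrounding TruLens internals are an assumption based on the traceback, not the library's actual code):

# Hypothetical reconstruction: the wrapped call's bound kwargs contain the
# "model" key, but with an explicit None value, so any downstream use of it
# (e.g. a cost-tracking lookup keyed on the model name) fails.
bindings_kwargs = {"model": None}        # what handle_wrapped_call reportedly sees
assert "model" in bindings_kwargs        # the key is present...
assert bindings_kwargs["model"] is None  # ...but its value is None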