I am trying to use Colang 2.x in my LangChain app for a beta example. I am using LangChain with an Azure OpenAI model endpoint and trying to get the Dialog Rails example from the NVIDIA docs (hello_world_3) with llm continuation to work, following the example here: https://docs.nvidia.com/nemo/guardrails/colang_2/getting_started/dialog-rails.html
However, when I invoke the chain to test whether the "hi" rail is working, I get the following error:
ValueError: The `output_vars` option is not supported for Colang 2.0 configurations.
which originates from here: https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/nemoguardrails/rails/llm/llmrails.py#L882
I am able to use Colang 1 successfully, but not Colang 2 when using it in the LangChain chain. When I use nemoguardrails chat I am able to test the "hi" rail, however llm continuation doesn't seem to work when I type what the example above suggests, i.e. I get no response back and it just spins.
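For reference, the CLI check can also be done from Python by calling the rails directly, bypassing RunnableRails entirely. A minimal sketch, assuming the ./config/ directory and the AzureChatOpenAI model shown further below:

from nemoguardrails import RailsConfig, LLMRails

# Minimal sketch: exercise the same rails that `nemoguardrails chat` uses,
# passing the LangChain model in code instead of defining it in config.yml.
config = RailsConfig.from_path("./config/")
rails = LLMRails(config, llm=model)  # `model` is the AzureChatOpenAI instance defined below

response = rails.generate(messages=[{"role": "user", "content": "hi"}])
print(response["content"])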
I have my ./config/main.co file as follows:
import core
import llm

flow main
  activate llm continuation
  activate greeting

flow greeting
  user expressed greeting
  bot express greeting

flow user expressed greeting
  user said "hi" or user said "hello"

flow bot express greeting
  bot say "Hello World! Im working for you!"
Within a Jupyter notebook I have the following:
import os
from dotenv import load_dotenv
load_dotenv(override=True)

import nest_asyncio
nest_asyncio.apply()

from langchain_openai import AzureChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

from nemoguardrails import RailsConfig, LLMRails
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

model = AzureChatOpenAI(
    ....
    # credentials submitted here
)

output_parser = StrOutputParser()
prompt = ChatPromptTemplate.from_template("{topic}")

# Baseline chain without guardrails:
chain = prompt | model | output_parser

# Chain with the model wrapped in guardrails; invoking this raises the ValueError above:
config = RailsConfig.from_path("./config/")
rails = RunnableRails(config)
chain_with_guardrails = prompt | (rails | model) | output_parser

text = "hi"
chain_with_guardrails.invoke(text)
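The same error can apparently be reproduced without LangChain by asking for output variables explicitly, which suggests RunnableRails requests them internally. A hedged sketch; the GenerationOptions import path and usage are my assumption based on the error message, not something taken from the docs example:

from nemoguardrails.rails.llm.options import GenerationOptions  # assumed import path

# Hedged repro without LangChain: explicitly requesting output variables
# hits the same check in llmrails.py#L882 that the wrapped chain triggers.
direct_rails = LLMRails(config, llm=model)
direct_rails.generate(
    messages=[{"role": "user", "content": "hi"}],
    options=GenerationOptions(output_vars=True),  # raises the ValueError under Colang 2.0
)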
Steps To Reproduce
Follow the config and Jupyter notebook setup shown above.
Expected Behavior
When invoking the chain in Jupyter, I should not get an error; instead, I should receive the response defined by the rails flow in main.co.
When testing in the nemoguardrails chat CLI, I should get an LLM-generated response for "how are you" instead of it constantly spinning up new workflows.
Actual Behavior
Described above"
Thank you @knitzschke for reporting this problem. Currently there is a gap: some of the options that work with Colang 1.0 are not yet available with Colang 2.0. We will add support in a future release, probably 0.13.0.
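In the meantime, one possible interim workaround (not an official fix; the helper below is hypothetical) is to call the rails directly inside a RunnableLambda, so the LangChain composition stays the same while avoiding the unsupported generation options:

from langchain_core.runnables import RunnableLambda
from nemoguardrails import RailsConfig, LLMRails

config = RailsConfig.from_path("./config/")
rails = LLMRails(config, llm=model)  # `model` is the AzureChatOpenAI instance from the notebook

def apply_rails(prompt_value):
    # Hypothetical helper: turn the prompt output into chat messages and call
    # the rails without any generation options. For this single-message prompt,
    # every message is treated as a user turn.
    messages = [{"role": "user", "content": m.content} for m in prompt_value.to_messages()]
    return rails.generate(messages=messages)["content"]

chain_with_guardrails = prompt | RunnableLambda(apply_rails)
chain_with_guardrails.invoke({"topic": "hi"})

The StrOutputParser is not needed here because generate() already returns the message content as a string.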
Did you check docs and existing issues?
Python version (python --version)
Python 3.11.8
Operating system/version
Windows 11 Enterprise
NeMo-Guardrails version (if you must use a specific version and not the latest)
0.11.0
nemoguardrails==0.11.0
langchain==0.3.4
langchain-community==0.3.3
langchain-core==0.3.12
langchain-openai==0.2.3