Integrate NeMo Guardrails with Custom LLM Framework #187
joaocp98662
started this conversation in General
Hi guys,
For context, my team is working on a project to develop a chatbot, and we would really like to use NeMo Guardrails. We are using open-source models from the HuggingFace Hub, so each model we use is downloaded to disk and loaded with the transformers library. We have our own implementation for handling prompts and other components; for inference we call the generate method of the transformers model.
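A minimal sketch of this kind of setup, assuming a causal LM from the Hub (the model name and generation parameters below are just placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; any causal LM from the HuggingFace Hub works the same way.
model_name = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Inference goes through the model's generate method.
inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```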
We are having some trouble figuring out how to integrate what we have already developed with NeMo Guardrails. The documentation has a section about using custom LLMs in the Configuration Guide, which states that it is possible to register a custom LLM provider by creating a class that inherits from LangChain's BaseLanguageModel class. We would like to use the framework we are developing instead of LangChain. Is this possible, or does it have to be integrated with LangChain?
TL;DR: How can we integrate NeMo Guardrails with our own LLM framework instead of using LangChain? Is it possible?
We'd really appreciate your help.
Thank you very much
-
Hi @joaocp98662! You don't have to "use LangChain" per se; you only need to wrap the LLM interface you already have in something that is compatible with LangChain's interface. As an example, the NeMo LLM provider we added recently (https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/nemoguardrails/llm/providers/nemollm.py) implements the LangChain interface. Let me know if you need more guidance on this.
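A minimal sketch of such a wrapper, assuming the transformers model and tokenizer from the setup above (the class name and engine name are just illustrative):

```python
from typing import Any, List, Optional

from langchain.llms.base import LLM
from nemoguardrails.llm.providers import register_llm_provider


class MyTransformersLLM(LLM):
    """Illustrative adapter that exposes a local transformers model
    through LangChain's LLM interface so NeMo Guardrails can call it."""

    hf_model: Any = None      # a transformers model, loaded elsewhere
    hf_tokenizer: Any = None  # the matching tokenizer

    @property
    def _llm_type(self) -> str:
        return "my_transformers_llm"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> str:
        # Delegate to the generate() call the team already uses.
        inputs = self.hf_tokenizer(prompt, return_tensors="pt")
        output_ids = self.hf_model.generate(**inputs, max_new_tokens=256)
        # Strip the echoed prompt and return only the completion.
        return self.hf_tokenizer.decode(
            output_ids[0][inputs["input_ids"].shape[-1]:],
            skip_special_tokens=True,
        )


# Make the wrapper available as an engine for guardrails configurations.
register_llm_provider("my_transformers_llm", MyTransformersLLM)
```

With the provider registered, the config.yml of the guardrails app can select it as the main model via `engine: my_transformers_llm`.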