Changing/Specifying the path to all-MiniLM-L6-v2 #97
-
Hi, I just started exploring this library today. Is it possible to change the path to the sentence transformer, specify a custom path, or switch to another sentence transformer? I'm unable to connect to the internet from my environment, so I have to download the models locally, upload them to ADLS, and load them from there.
Replies: 4 comments 1 reply
-
Hi @jvhuang1786! Currently it is only possible to change the name of the model (https://github.com/NVIDIA/NeMo-Guardrails/blob/main/nemoguardrails/actions/llm/generation.py#L73). This is not documented, so we'll fix this. Adding support for a `cache_path` parameter is not very difficult. If you have a bit of time to contribute, I can point you in the right direction (it needs to be added here: https://github.com/NVIDIA/NeMo-Guardrails/blob/main/nemoguardrails/kb/basic.py#L31), and we need to make sure we propagate it correctly from the config:

```yaml
models:
  ...
  - type: embedding
    engine: SentenceTransformer
    model: all-MiniLM-L6-v2
    parameters:
      cache_path: ...
```
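For reference, a `cache_path` value like the one above would ultimately be handed to sentence-transformers, whose constructor accepts both a hub model name and a local directory path. A minimal sketch of the resolution logic, assuming a hypothetical helper name (`resolve_embedding_model` is not part of NeMo-Guardrails):

```python
import os
from typing import Optional


def resolve_embedding_model(model: str, cache_path: Optional[str] = None) -> str:
    """Hypothetical helper: turn a (model, cache_path) pair into something
    SentenceTransformer(...) can load without network access.

    SentenceTransformer accepts either a hub name or a local directory, so
    if a cached copy exists on disk we return the directory path directly.
    """
    if cache_path:
        candidate = os.path.join(cache_path, model)
        if os.path.isdir(candidate):
            return candidate  # load straight from the local copy
    return model  # fall back to the hub name (may require network)
```

With models downloaded and mounted locally, calling `resolve_embedding_model("all-MiniLM-L6-v2", "/mnt/models")` would return the local directory when it exists, and the plain model name otherwise.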
-
The link shared is broken. Can you please share whether there is an alternative way to use this package offline?
-
Hi @Haxeebraja! The easiest way would be to add support for using the …
If you have a bit of time to test this and contribute back a PR, that would be great.

Razvan
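The change being discussed in this thread could be sketched roughly as follows. The class and attribute names below are illustrative, not the actual NeMo-Guardrails code; `cache_folder` is the real sentence-transformers constructor argument that controls where model files are stored and looked up:

```python
from typing import Optional


class BasicEmbeddingsIndex:
    """Illustrative sketch only: shows how a cache_path option could be
    propagated from the config down to the embedding model constructor."""

    def __init__(
        self,
        embedding_model: str = "all-MiniLM-L6-v2",
        cache_path: Optional[str] = None,
    ):
        self.embedding_model = embedding_model
        self.cache_path = cache_path
        self._model = None

    def _init_model(self):
        # Imported lazily so the index can be constructed even when the
        # sentence-transformers package is not installed.
        from sentence_transformers import SentenceTransformer

        # cache_folder tells sentence-transformers where to find (and store)
        # downloaded model files, which enables fully offline use once the
        # model has been placed there.
        self._model = SentenceTransformer(
            self.embedding_model, cache_folder=self.cache_path
        )
```

The key point is simply that the value read from the config's `parameters` block has to travel all the way to the `SentenceTransformer(...)` call.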
-
One of the ways I tried that works is to set a local path for the embeddings model in the config file:

models:
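The snippet above is truncated in this export. A hypothetical config along the same lines could look like the following; the path is a made-up example, and passing a filesystem path in place of the hub name relies on sentence-transformers accepting a local directory as the model identifier:

```yaml
models:
  - type: embedding
    engine: SentenceTransformer
    # Example path only: point this at the directory where the downloaded
    # all-MiniLM-L6-v2 files live, so no network access is needed.
    model: /local/models/all-MiniLM-L6-v2
```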