Hello!
I have been experimenting with some smaller LLMs recently and I can't figure out the following error with dolly-v2-3b.
I have looked at issue #50:
#50
but the example linked there:
https://github.com/NVIDIA/NeMo-Guardrails/tree/main/examples/llm/hf_pipeline_dolly
no longer exists (I looked through some old commits to confirm).
When I try to use the hugging_face endpoint, I get a temperature error. And when I try to wrap a local LLM as follows:
```python
from functools import lru_cache

from torch.cuda import device_count
from langchain.llms import HuggingFacePipeline
from nemoguardrails import LLMRails, RailsConfig
from nemoguardrails.llm.helpers import get_llm_instance_wrapper
from nemoguardrails.llm.providers import register_llm_provider

yaml_content = """
models:
  - type: main
    engine: hf_pipeline_dolly
"""

@lru_cache
def get_dolly_v2_3b_llm():
    repo_id = "databricks/dolly-v2-3b"
    params = {"temperature": 0, "max_length": 1024, "trust_remote_code": True}

    # Use the first CUDA-enabled GPU, if any
    device = 0 if device_count() else -1
    llm = HuggingFacePipeline.from_model_id(
        model_id=repo_id, device=device, task="text-generation", model_kwargs=params
    )
    return llm

HFPipelineDolly = get_llm_instance_wrapper(
    llm_instance=get_dolly_v2_3b_llm(), llm_type="hf_pipeline_dolly"
)
register_llm_provider("hf_pipeline_dolly", HFPipelineDolly)

config = RailsConfig.from_content(
    colang_content=colang_content,  # defined elsewhere in my script
    yaml_content=yaml_content,
)
rag_rails = LLMRails(config)
```
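For reference, once `rag_rails` is built, I invoke it roughly like this (the message content is just a placeholder):

```python
# Minimal smoke test of the rails; the user message is a placeholder.
response = rag_rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```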
I get a similar temperature error, namely:

`Parameter temperature does not exist for WrapperLLM`

and

`Error "HuggingFacePipeline" object has no attribute "a_call" while execution generate_user_intent.`

I get these errors regardless of whether I include `temperature` in `params`.
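From the second error, it looks like `generate_user_intent` calls the LLM through an async path that `HuggingFacePipeline` doesn't implement. One thing I considered, though I haven't gotten it working, is a thin wrapper that adds an async method by delegating to the synchronous call; a rough, untested sketch (the class and field names here are my own, not from any library):

```python
from typing import Any, List, Optional

from langchain.llms.base import LLM

class AsyncHFPipelineWrapper(LLM):
    """Hypothetical wrapper: adds an async call that delegates to the sync pipeline."""

    pipeline: Any  # the HuggingFacePipeline instance to wrap

    @property
    def _llm_type(self) -> str:
        return "async_hf_pipeline_wrapper"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # Forward to the wrapped pipeline's synchronous call.
        return self.pipeline(prompt, stop=stop)

    async def _acall(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # The HF pipeline has no real async support, so just run it synchronously.
        return self._call(prompt, stop=stop)
```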
Is there a guide you could point me to that shows how to use local LLMs like this effectively?