Describe the bug
I installed the custom tool into Azure Promptflow. I am using a llama-7b-text-generation MaaS running on Azure.
When testing my Promptflow, the first problem was that the torch library was not installed in the runtime environment. Once I installed it, I received the error Run failed: KeyError: 0.
I grabbed the following requirements from the example in the Promptflow GitHub repo but still had no luck.
For now I am going to try to use the […]. Please let me know any other information I can provide.
Steps to reproduce
Start a MaaS model on Azure
Install the tool as a custom tool in your compute instance.
Install the requirements needed to run a simple example.
Run an example with required inputs.
Expected Behavior
Receive an error with the traceback.
Logs
Run failed: KeyError: 0
Traceback (most recent call last):
File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 556, in wrapped
output = func(*args, **kwargs)
File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/llmlingua_promptflow/tools/llmlingua.py", line 1613, in prompt_compress
res = llm_lingua.compress_prompt(context=prompt, rate=rate, use_sentence_level_filter=False, use_context_level_filter=False)
File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/llmlingua_promptflow/tools/llmlingua.py", line 574, in compress_prompt
context = self.trunk_token_compress(
File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/llmlingua_promptflow/tools/llmlingua.py", line 1278, in trunk_token_compress
compressed_input_ids = np.concatenate([self.api_results[id][0] for id in range(trunk_num)], axis=1)
File "/azureml-envs/prompt-flow/runtime/lib/python3.9/site-packages/llmlingua_promptflow/tools/llmlingua.py", line 1278, in <listcomp>
compressed_input_ids = np.concatenate([self.api_results[id][0] for id in range(trunk_num)], axis=1)
KeyError: 0
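For context on why this traceback ends in KeyError: 0, a hypothetical, simplified sketch of the failing pattern in trunk_token_compress (names abbreviated from the traceback; the real api_results bookkeeping is more involved): api_results appears to be populated only for chunks whose API call returned logits, so when the endpoint returns none, the dict stays empty and indexing chunk 0 raises KeyError.

```python
import numpy as np

# Hypothetical, simplified sketch of the failing list comprehension in
# trunk_token_compress: api_results is keyed by chunk index and is only
# filled when the MaaS endpoint returns logits for that chunk.
api_results = {}   # empty: the endpoint returned no logits
trunk_num = 1      # one prompt chunk was submitted

try:
    compressed_input_ids = np.concatenate(
        [api_results[i][0] for i in range(trunk_num)], axis=1
    )
except KeyError as err:
    error_message = f"KeyError: {err}"

print(error_message)  # KeyError: 0, matching the logs above
```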
Additional Information
No response
Hi @chris-chatsi , thanks for your feedback.
The KeyError occurred because the API did not return logits. Can you test your MaaS model with the following code to check whether you can print the logits of the prompt? You only need to replace the values of your_api_endpoint and your_api_key.
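The maintainer's snippet is not preserved in this thread. As a stand-in, here is a hedged sketch of the kind of check described: the endpoint path, the payload fields (logprobs, echo), and the response shape are assumptions based on the OpenAI-style completions schema that Azure llama text-generation MaaS deployments commonly expose, so adjust them to match your deployment.

```python
import json
import urllib.request


def build_logprobs_request(endpoint: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build a completion request asking the endpoint to echo the prompt
    with per-token log-probabilities. Payload fields are assumptions
    based on the OpenAI-style completions schema."""
    payload = {
        "prompt": prompt,
        "max_tokens": 1,
        "logprobs": 5,  # request per-token log-probabilities
        "echo": True,   # include the prompt tokens in the scored output
    }
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )


if __name__ == "__main__":
    # Replace these two values with your deployment's endpoint and key.
    your_api_endpoint = "https://<deployment>.<region>.inference.ai.azure.com/v1/completions"
    your_api_key = "<your-api-key>"

    req = build_logprobs_request(your_api_endpoint, your_api_key, "Hello, world!")
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    # If this prints None, the deployment is not returning logits/logprobs,
    # which would explain the empty api_results and the KeyError.
    print(body["choices"][0].get("logprobs"))
```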