
Error: input 3 is none #7614

Open
jds250 opened this issue Jan 11, 2025 · 13 comments
Labels
partner: qualcomm: For backend delegation, kernels, demo, etc. from the 3rd-party partner, Qualcomm
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@jds250

jds250 commented Jan 11, 2025

Title

Error: input 3 is none when running Llama example in QNN ExecuTorch on Android


Description

I followed the instructions in the [Llama2 README](https://github.com/pytorch/executorch/blob/main/examples/qualcomm/oss_scripts/llama2/README.md) to run the llama.py script using QNN ExecuTorch on Android. Execution fails with the error `input 3 is none`, and the metadata appears to be read from the model twice during the run.


Steps to Reproduce

  1. Environment setup:

    • QNN SDK version: 2.26.0.240828
    • Platform: Qualcomm SM8650
    • Android NDK: r26d
  2. Run the following command:

    python llama.py -b executorch/build-android  -s 112dhb -m SM8650 \
        --ptq 16a4w --checkpoint stories110M.pt --params params.json \
        --tokenizer_model tokenizer.model --tokenizer_bin tokenizer.bin \
        --prompt "what is python?" \
        --pre_gen_pte executorch/examples/qualcomm/oss_scripts/llama2/llama2_qnn/

LOG

I 00:00:00.001788 executorch:runner.cpp:65] Creating LLaMa runner: model_path=llama2_qnn.pte, tokenizer_path=tokenizer.bin
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[WARNING] [Qnn ExecuTorch]:  <W> Initializing HtpProvider

[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
I 00:00:00.269470 executorch:runner.cpp:80] Reading metadata from model
I 00:00:00.269729 executorch:runner.cpp:139] get_vocab_size: 32000
I 00:00:00.269816 executorch:runner.cpp:139] get_bos_id: 1
I 00:00:00.269872 executorch:runner.cpp:139] get_eos_id: 2
I 00:00:00.269926 executorch:runner.cpp:139] get_n_bos: 1
I 00:00:00.269977 executorch:runner.cpp:139] get_n_eos: 1
I 00:00:00.270026 executorch:runner.cpp:139] get_max_seq_len: 1024
I 00:00:00.270081 executorch:runner.cpp:139] get_head_dim: 64
I 00:00:00.270129 executorch:runner.cpp:139] get_dim: 768
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[WARNING] [Qnn ExecuTorch]:  <W> qnnOpPackageManager: hexagon unload op package function pointer is nullptr!

[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
I 00:00:14.273970 executorch:runner.cpp:80] Reading metadata from model
I 00:00:14.274117 executorch:runner.cpp:139] get_vocab_size: 32000
I 00:00:14.274138 executorch:runner.cpp:139] get_bos_id: 1
I 00:00:14.274154 executorch:runner.cpp:139] get_eos_id: 2
I 00:00:14.274169 executorch:runner.cpp:139] get_n_bos: 1
I 00:00:14.274186 executorch:runner.cpp:139] get_n_eos: 1
I 00:00:14.274203 executorch:runner.cpp:139] get_max_seq_len: 1024
I 00:00:14.274217 executorch:runner.cpp:139] get_head_dim: 64
I 00:00:14.274230 executorch:runner.cpp:139] get_dim: 768
E 00:00:14.286718 executorch:module.cpp:185] input 3 is none
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[WARNING] [Qnn ExecuTorch]:  <W> qnnOpPackageManager: hexagon unload op package function pointer is nullptr!

[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

/data/local/tmp/jds/executorch/single_llama/outputs/: 1 file pulled.
Results[0]:

Finish the running pre_gen_pte from /home/jds/executorch/examples/qualcomm/oss_scripts/llama2/llama2_qnn/

So I found that there is no output in my output file.

adb logcat

By the way, I also noticed some fastrpc errors (maybe because I don't have root):

01-11 14:54:55.560  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/fastrpc_apps_user.c:3592: Error 0xd: open_shell failed for domain 3 search paths used are /dsp/, /vendor/dsp/, /vendor/dsp/xdsp/ (errno Permission denied)
01-11 14:54:55.599  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /odm/lib/rfsa/adsp : errno is Permission denied
01-11 14:54:55.599  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /vendor/lib/rfsa/adsp/ : errno is Permission denied
01-11 14:54:55.599  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /system/vendor/lib/rfsa/adsp : errno is Permission denied
01-11 14:54:55.599  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /vendor/lib64/rfs/dsp : errno is Permission denied
01-11 14:54:55.599  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /vendor/lib/rfsa/adsp : errno is Permission denied
01-11 14:54:55.610  9667  9669 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0xd: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x1f050100
01-11 14:54:55.639  9667  9669 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0xd: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x1f050100
01-11 14:54:55.640  9667  9669 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0x2: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x13050100
01-11 14:54:55.656  9667  9669 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0xd: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x1f050100
01-11 14:54:55.657  9667  9669 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0x2: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x13050100
01-11 14:54:55.712  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/fastrpc_apps_user.c:1473: Error 0x80000414: remote_handle64_invoke failed for handle 0xb4000079eace8210, method 3 on domain 3 (sc 0x3010100) (errno Success)
01-11 14:54:55.909  9667  9671 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/fastrpc_notif.c:57:Error 0xc: notif_fastrpc_thread FastRPC notification worker thread exited for domain 3 (errno Success), notif_domain_deinit started 0
01-11 14:54:55.926  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/fastrpc_apps_user.c:3592: Error 0xd: open_shell failed for domain 3 search paths used are /dsp/, /vendor/dsp/, /vendor/dsp/xdsp/ (errno Permission denied)
01-11 14:54:55.959  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /odm/lib/rfsa/adsp : errno is Permission denied
01-11 14:54:55.959  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /vendor/lib/rfsa/adsp/ : errno is Permission denied
01-11 14:54:55.959  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /system/vendor/lib/rfsa/adsp : errno is Permission denied
01-11 14:54:55.959  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /vendor/lib64/rfs/dsp : errno is Permission denied
01-11 14:54:55.959  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/log_config.c:605:Error : Unable to add watcher for folder /vendor/lib/rfsa/adsp : errno is Permission denied
01-11 14:54:55.969  9667  9674 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0xd: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x1f050100
01-11 14:54:55.993  9667  9674 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0xd: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x1f050100
01-11 14:54:55.993  9667  9674 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0x2: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x13050100
01-11 14:54:56.007  9667  9674 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0xd: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x1f050100
01-11 14:54:56.008  9667  9674 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/mod_table.c:863: Error 0x2: open_mod_table_handle_invoke failed for handle:0x63df7da8, sc:0x13050100
01-11 14:54:56.053  9667  9667 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/fastrpc_apps_user.c:1473: Error 0x80000414: remote_handle64_invoke failed for handle 0xb4000079eacc04c0, method 3 on domain 3 (sc 0x3010100) (errno Success)
01-11 14:55:16.853  9667  9676 E qnn_llama_runner: vendor/qcom/proprietary/adsprpc/src/fastrpc_notif.c:57:Error 0xc: notif_fastrpc_thread FastRPC notification worker thread exited for domain 3 (errno Success), notif_domain_deinit started 0

I wonder if root access is necessary to deploy the model?

@shewu-quic
Collaborator

shewu-quic commented Jan 13, 2025

Hi @jds250,

Thanks for trying.
Could you please let me know which branch of ExecuTorch you used and what command you used to export the PTE file?

I believe root should not be necessary to run your model.

@jds250
Author

jds250 commented Jan 13, 2025

> Hi @jds250,
>
> Thanks for trying. Could you please let me know which branch of ExecuTorch you used and what command you used to export the PTE file?
>
> I believe root should not be necessary to run your model.

Hi, I am using the release/0.4 branch. Here are my steps to reproduce. Exporting the PTE file is handled by the llama.py script in examples/qualcomm/oss_scripts/llama2, and the PTE file is generated in the llama2_qnn folder.

Step 1: Setup

  1. Follow the tutorial to set up ExecuTorch.
  2. Follow the tutorial to build Qualcomm AI Engine Direct Backend.

Step 2: Prepare Model

Download and prepare the stories110M model:

# tokenizer.model & stories110M.pt:
wget "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.pt"
wget "https://raw.githubusercontent.com/karpathy/llama2.c/master/tokenizer.model"

# tokenizer.bin:
python -m extension.llm.tokenizer.tokenizer -t tokenizer.model -o tokenizer.bin

# params.json:
echo '{"dim": 768, "multiple_of": 32, "n_heads": 12, "n_layers": 12, "norm_eps": 1e-05, "vocab_size": 32000}' > params.json

Step 3: Run default examples

The default example generates a story based on the given prompt, "Once".

# 16a4w quant:
python llama.py -b /home/jds/executorch/build-android -s 1f1fa994 -m SM8650 \
    --ptq 16a4w --checkpoint stories110M.pt --params params.json \
    --tokenizer_model tokenizer.model --tokenizer_bin tokenizer.bin \
    --prompt "what is python?" \
    --pre_gen_pte /home/jds/executorch/examples/qualcomm/oss_scripts/llama2/llama2_qnn/

@shewu-quic
Collaborator

Got it. Let me clarify one thing.
To use the --pre_gen_pte argument, you need to compile first and obtain the PTE file. After that, you can use this argument to skip the compilation step.
You have compiled it first, right?
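
A minimal sketch of that two-step flow, assuming the same flags as the command earlier in this issue; the build directory, device serial, and PTE directory below are placeholders, not values from this thread:

# Step 1: compile; this generates the PTE under the script's output folder
python llama.py -b <build-android_dir> -s <device_serial> -m SM8650 \
    --ptq 16a4w --checkpoint stories110M.pt --params params.json \
    --tokenizer_model tokenizer.model --tokenizer_bin tokenizer.bin \
    --prompt "what is python?"

# Step 2: rerun with --pre_gen_pte pointing at that folder to skip compilation
python llama.py -b <build-android_dir> -s <device_serial> -m SM8650 \
    --ptq 16a4w --checkpoint stories110M.pt --params params.json \
    --tokenizer_model tokenizer.model --tokenizer_bin tokenizer.bin \
    --prompt "what is python?" \
    --pre_gen_pte <path_to_generated_pte_dir>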

@shewu-quic
Collaborator

Oh, I see. It's a bug in how the input is set. We have a fix in this PR:
d174637#diff-e37e4f997bb5f213089dd0dd2314ff2327452cce92d87ff4fe014086a1e93f12

If possible, could you use the main branch?

@jds250
Author

jds250 commented Jan 13, 2025

> Got it. Let me clarify one thing. To use the --pre_gen_pte argument, you need to compile first and obtain the PTE file. After that, you can use this argument to skip the compilation step. You have compiled it first, right?

Yes, I have compiled it first.

@jds250
Author

jds250 commented Jan 13, 2025

> Oh, I see. It's a bug in how the input is set. We have a fix in this PR. d174637#diff-e37e4f997bb5f213089dd0dd2314ff2327452cce92d87ff4fe014086a1e93f12
>
> If possible, could you use the main branch?

Thank you! I will try it again.

@shewu-quic
Collaborator

By the way, if you are interested in Llama 3.2, we have provided a script to export and run it. You can find it here: https://github.com/pytorch/executorch/tree/main/examples/qualcomm/oss_scripts/llama3_2

To enhance the user experience, we will integrate our script for Llama as soon as possible.

@michaelk77

michaelk77 commented Jan 13, 2025

Hi @shewu-quic,

I am experiencing a very similar issue where the model does not respond, and in logcat I see the error `input 2 is none`. The app reports that the model answered in 0.005 seconds, but the output is an empty message.

Environment:

  • Branch: main
  • OS: Ubuntu 24.04 LTS
  • QNN SDK version: v2.26.0.240828 (works with export, but the model doesn't respond correctly).
  • Other QNN versions: Encountered error 1 during model loading on all versions except v2.26.0.240828.
    • Note: For testing different QNN versions, I fully deleted executorch and the environment each time, then rebuilt the Android application and model to ensure a clean setup.
  • Model: Llama 3.2 1B

Steps Tried:

  1. Model Export:

    • Exported using python -m examples.models.llama.export_llama with quantization qnn_16a4w.
    • On v2.26.0.240828, the model produces nonsensical outputs.
  2. Alternative Approach:

    • I tried using the script llama.py from the examples/qualcomm/oss_scripts/llama3_2 directory with the following command:
      python examples/qualcomm/oss_scripts/llama3_2/llama.py \
        -b build-android \
        -m SM8475 \
        --checkpoint "consolidated.00.pth" \
        --params "original_params.json" \
        --ptq 16a4w \
        --model_size 1B \
        --tokenizer_model "tokenizer.model" \
        --prompt "what is 1+1" \
        --temperature 0 \
        --model_mode kv \
        --prefill_seq_len 32 \
        --kv_seq_len 128 \
        --compile_only
  3. Outcome:

    • The model does not respond.
    • Logcat error: input 2 is none.
  4. Additional Issue:

    • During execution, the following traceback error occurs:
      Traceback (most recent call last):
        File "/home/mihail/executorch/examples/qualcomm/oss_scripts/llama3_2/llama.py", line 928, in <module>
          main()
        File "/home/mihail/executorch/examples/qualcomm/oss_scripts/llama3_2/llama.py", line 889, in main
          quant_attrs = compile(args, pte_filename)
        File "/home/mihail/executorch/examples/qualcomm/oss_scripts/llama3_2/llama.py", line 488, in compile
          llama_instance_list[0].lowering_modules(
        File "/home/mihail/executorch/examples/qualcomm/oss_scripts/llama3_2/llama.py", line 369, in lowering_modules
          with open(f"{work_space}/{pte_filename}.pte", "wb") as file:
      NameError: name 'pte_filename' is not defined
      
    • It seems the variable pte_filename is undefined and should be replaced with self.pte_filename.
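
For illustration, the one-line fix implied by the traceback; the surrounding lines are paraphrased from the stack trace, not copied from the repository:

    # examples/qualcomm/oss_scripts/llama3_2/llama.py, inside lowering_modules()
    # Before (raises NameError: name 'pte_filename' is not defined):
    with open(f"{work_space}/{pte_filename}.pte", "wb") as file:
        ...
    # After: qualify it as the instance attribute (assuming it is set during init):
    with open(f"{work_space}/{self.pte_filename}.pte", "wb") as file:
        ...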

Request for Help:

Could you please advise if there are any additional fixes or specific steps to resolve these issues?

Thank you for your support! I appreciate any guidance you can provide.

@lucylq added the partner: qualcomm and triaged labels on Jan 13, 2025
@shewu-quic
Collaborator

Hi @michaelk77,

Thanks for trying.
Let me clarify your questions.

  1. Issue with examples.models.llama.export_llama
    We discovered that the model definition in llama_transformer.py isn't optimal for running the Llama model on the QNN backend. We've initiated a new definition in examples/qualcomm/oss_scripts/llama3_2 and are working towards more reasonable output.
  2. Runtime error in examples/qualcomm/oss_scripts/llama3_2
    Could you please provide the command you used to run the PTE and specify the device you're using?
  3. Additional issue
    Thanks for pointing that out. We have a PR to address it.

Feel free to let me know if you need any further assistance!

@michaelk77

Hi @shewu-quic,

Thank you for your response and clarification!

Runtime Environment:

  • I am running the Llama model using the Android demo app: LlamaDemo.
  • Device: iQOO 10 Pro (8GB/256GB), Snapdragon 8 Gen1 Plus, Android 14.

Updated Status:

  • With the updated main branch, I was able to switch to QNN version 2.28.0.241029.
  • However, the issue persists when running the kv_llama3_2_qnn.pte model:
    • Logcat Error: input 2 is none.
    • Android App Behavior: The app notes that the model responded, but the output is empty.

PTE Generation:

I generated the kv_llama3_2_qnn.pte model using the following command:

python examples/qualcomm/oss_scripts/llama3_2/llama.py \
  -b build-android \
  -m SM8475 \
  --checkpoint "consolidated.00.pth" \
  --params "original_params.json" \
  --ptq 16a4w \
  --model_size 1B \
  --tokenizer_model "tokenizer.model" \
  --prompt "what is 1+1" \
  --temperature 0 \
  --model_mode kv \
  --prefill_seq_len 32 \
  --kv_seq_len 128 \
  --compile_only

Model Source:

I am using the model files from Meta Llama 3.2 1B Instruct on Hugging Face.

If you need additional logs or further details, please let me know. I appreciate your assistance!

@shewu-quic
Collaborator

Thanks for the information.

Could you please use the following command to run the PTE? I think the PTE generated from the static Llama script may not be integrated into the demo app yet.

python examples/qualcomm/oss_scripts/llama3_2/llama.py \
  -b build-android \
  -m SM8475 \
  --checkpoint "consolidated.00.pth" \
  --params "original_params.json" \
  --ptq 16a4w \
  --model_size 1B \
  --tokenizer_model "tokenizer.model" \
  --prompt "what is 1+1" \
  --temperature 0 \
  --model_mode kv \
  --prefill_seq_len 32 \
  --kv_seq_len 128 \
  --pre_gen_pte ${path_to_your_pte_directory}

@michaelk77

Thank you for providing the command to run the PTE. I executed it with a minor addition, the -s flag, to specify my device:

python examples/qualcomm/oss_scripts/llama3_2/llama.py \
  -b build-android \
  -m SM8475 \
  --checkpoint "consolidated.00.pth" \
  --params "original_params.json" \
  --ptq 16a4w \
  --model_size 1B \
  --tokenizer_model "tokenizer.model" \
  --prompt "what is 1+1" \
  --temperature 0 \
  --model_mode kv \
  --prefill_seq_len 32 \
  --kv_seq_len 128 \
  --pre_gen_pte ${path_to_your_pte_directory} \
  -s <device_serial>  # my device code from ADB
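
For reference, the serial to pass with -s comes from adb's standard device listing (ordinary adb usage, not specific to this script):

adb devices   # the first column of each listed device is the serial for -s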

Observations:

  • The model response is extremely unusual and doesn't seem coherent. Here's the output:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
what is 1+1<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Python
Code
Python
Code
AllYouThER

Python

Code
A
Python

Python

A

Code

Python

**
**

**
**
**
**
**
**
**
**
**
**
**
**
**
**
**
**
**
**
**
**
**
Finish the running pre_gen_pte from /home/mihail/executorch/llama3_2_qnn

Log Details:

Here is the relevant portion of the logcat output during execution:

[INFO] [Qnn ExecuTorch]: Deserializing processed data using QnnContextCustomProtocol
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[WARNING] [Qnn ExecuTorch]:  <W> Initializing HtpProvider

[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[INFO] [Qnn ExecuTorch]: QnnContextCustomProtocol expected magic number: 0x5678abcd but get: 0x2000000
[WARNING] [Qnn ExecuTorch]:  <W> Cost Based unsupported on soc SM8475

[INFO] [Qnn ExecuTorch]: Running level=1 optimization.
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[WARNING] [Qnn ExecuTorch]:  <W> Function not called, PrepareLib isn't loaded!

Performance Stats:

The PyTorchObserver logs indicate the following:

  • Prompt Tokens: 16
  • Generated Tokens: 111
  • Total Inference Time: ~360 ms
  • The app notes that the model responded, but the content is gibberish.

Could you let me know if there’s any misconfiguration or additional step I should take? Thank you for your assistance!

@shewu-quic
Collaborator

Hi @michaelk77,

Sorry for the late reply.
We have tested on SM8650 and got more reasonable output.
If possible, could you please try a newer device?
As for the accuracy issue, we are trying to fix it with QAT.
