
looking for qfd360_sl_model.pt for facedetlite model.py #146

Open
MartialTerran opened this issue Jan 3, 2025 · 2 comments

@MartialTerran

The example model.py at https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/face_det_lite/model.py
and
https://huggingface.co/qualcomm/Lightweight-Face-Detection-Quantized
reference a parameter checkpoint file named qfd360_sl_model.pt:
DEFAULT_WEIGHTS = "qfd360_sl_model.pt"

But this checkpoint file is not provided in the adjacent https://github.com/quic/ai-hub-models/tree/main/qai_hub_models/models/face_det_lite

At this other location, https://huggingface.co/qualcomm/Lightweight-Face-Detection-Quantized/tree/main
there are "quantized" model weights associated with Qualcomm's Lightweight-Face-Detection-Quantized.

So there is a file mismatch between model.py (which looks for qfd360_sl_model.pt) and the pretrained model parameters available elsewhere. Therefore: 1) please explain how to modify model.py to load parameters from the files available at https://huggingface.co/qualcomm/Lightweight-Face-Detection-Quantized/tree/main
and 2) please provide the referenced qfd360_sl_model.pt at https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/face_det_lite/
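For reference, this is roughly how I expect the float model to be loaded through the package itself (a minimal sketch; the `Model` alias and `from_pretrained()` entry point are my assumptions based on the common qai_hub_models package layout, not verified for this specific model):

```python
# Minimal sketch, assuming the usual qai_hub_models package layout.
# `Model` and `from_pretrained()` are assumed names; check the face_det_lite
# package's __init__.py and model.py for the exact class and entry point.
from qai_hub_models.models.face_det_lite import Model

# from_pretrained() presumably fetches the default checkpoint
# (DEFAULT_WEIGHTS = "qfd360_sl_model.pt") into a local cache and loads it,
# rather than reading a file checked into the GitHub tree.
model = Model.from_pretrained()
model.eval()
```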

@mestrona-3 added the "bug" label Jan 7, 2025

shreyajn commented Jan 7, 2025

When you run export.py for this model, the weights it needs are downloaded to your local machine. The model is then instantiated with the downloaded weights and a traced TorchScript model is created. That traced model is uploaded to AI Hub to be compiled so it can run on device. Similarly, for quantized models you can run the export script to get the model files.
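For example, running `python -m qai_hub_models.models.face_det_lite.export` automates roughly the flow below (a sketch only; the names, input shape, input name, and device here are illustrative assumptions, not the exact contents of export.py):

```python
# Rough sketch of what the export script automates; the input shape, input
# name, and target device are assumptions, not copied from export.py.
import torch
import qai_hub
from qai_hub_models.models.face_det_lite import Model  # assumed package alias

model = Model.from_pretrained()       # downloads the default weights to a local cache
model.eval()

example = torch.rand(1, 1, 480, 640)  # assumed grayscale 480x640 input; check model.py
traced = torch.jit.trace(model, (example,))  # traced TorchScript model

# The traced model is uploaded to AI Hub and compiled for an on-device target.
compile_job = qai_hub.submit_compile_job(
    model=traced,
    device=qai_hub.Device("Samsung Galaxy S23"),  # pick any device from qai_hub.get_devices()
    input_specs={"image": tuple(example.shape)},  # "image" is an assumed input name
)
```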

The Hugging Face repo hosts the three target formats (QNN, ONNX, LiteRT), not the TorchScript model / weights.

Please let us know if you hit any issues when running the export scripts.

MartialTerran commented Jan 8, 2025 via email

@mestrona-3 added the "question" label (Please ask any questions on Slack. This issue will be closed once responded to.) and removed the "bug" label Jan 8, 2025