Fix AutoAWQQuantizer GptqQuantizer supported ep and device #1571
base: main
Conversation
```diff
@@ -228,15 +228,15 @@
     },
     "AutoAWQQuantizer": {
         "module_path": "olive.passes.pytorch.autoawq.AutoAWQQuantizer",
-        "supported_providers": [ "CPUExecutionProvider" ],
-        "supported_accelerators": [ "cpu" ],
+        "supported_providers": [ "CUDAExecutionProvider" ],
```
Maybe we should make it `*` for both providers and accelerators. The quantization happens on the PyTorch model, and the exported model is compatible with all EPs.
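For concreteness, a sketch of what such a wildcard entry could look like (a config fragment based on this suggestion; whether the schema actually accepts `*` here is an assumption, not confirmed by the thread):

```json
"AutoAWQQuantizer": {
    "module_path": "olive.passes.pytorch.autoawq.AutoAWQQuantizer",
    "supported_providers": [ "*" ],
    "supported_accelerators": [ "*" ]
}
```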
Their only requirement is that they need GPUs to run, but that is a host-machine requirement, not a target provider or EP requirement.
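To illustrate the distinction, the GPU requirement could be enforced as a host-side runtime check inside the pass rather than encoded in the target EP metadata (a minimal sketch, assuming the pass runs on PyTorch, which `AutoAWQQuantizer` does):

```python
import torch

# Host-machine requirement: the quantization itself needs a CUDA device,
# regardless of which EP the exported model will eventually target.
if not torch.cuda.is_available():
    raise RuntimeError("AutoAWQ quantization requires a CUDA-capable GPU on the host.")
```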
Does this `supported_providers` mean which EPs the output model supports? Is it the same concept for `supported_accelerators`?
Yes. It's used by auto-opt to filter out passes based on the intended target EP.
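For illustration, a minimal sketch of what that filtering could look like (the data structure, function, and the `SomeCpuOnlyPass` entry are hypothetical, not Olive's actual auto-opt code):

```python
# Hypothetical sketch of auto-opt's EP-based pass filtering.
PASS_CAPABILITIES = {
    "AutoAWQQuantizer": {
        "supported_providers": ["CUDAExecutionProvider"],
        "supported_accelerators": ["gpu"],
    },
    # Hypothetical entry for contrast.
    "SomeCpuOnlyPass": {
        "supported_providers": ["CPUExecutionProvider"],
        "supported_accelerators": ["cpu"],
    },
}


def passes_for_target(target_ep: str, target_accelerator: str) -> list[str]:
    """Keep passes whose metadata matches the target, treating "*" as a wildcard."""
    selected = []
    for name, caps in PASS_CAPABILITIES.items():
        ep_ok = "*" in caps["supported_providers"] or target_ep in caps["supported_providers"]
        acc_ok = "*" in caps["supported_accelerators"] or target_accelerator in caps["supported_accelerators"]
        if ep_ok and acc_ok:
            selected.append(name)
    return selected


print(passes_for_target("CUDAExecutionProvider", "gpu"))  # ['AutoAWQQuantizer']
```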
I see. I can see that some other passes, like `QNNConversion`, are also in this list, but a QNN model doesn't have any EP concept. If this is only used by auto-opt, and auto-opt targets ONNX models, should the unrelated passes whose output model is not an ONNX model be removed here?
@shaahji can comment more on it. I think in general we can just use `*` for passes that don't deal with ONNX models; that way, if they are scheduled by auto-opt, they don't get filtered out.
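Applied to a non-ONNX pass, the suggested convention would look roughly like this (an illustrative fragment, not the repository's actual `QNNConversion` entry):

```json
"QNNConversion": {
    "supported_providers": [ "*" ],
    "supported_accelerators": [ "*" ]
}
```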
Describe your changes
Fix the supported EP and device metadata for AutoAWQQuantizer and GptqQuantizer.
Checklist before requesting a review
`lintrunner -a`
(Optional) Issue link