fix g3 dequant (pytorch#7683)
Summary:
Pull Request resolved: pytorch#7683

Fix the dequant signature. This is the cadence::dequantize_per_tensor.out custom op, so its function signature differs from the standard one. Eventually we would want to use the same signature as quantized_decomposed::*, but that would require us to make this change for all backends.
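The signature difference described above can be sketched as follows. This is a simplified, self-contained mock (the `Tensor`/`ScalarType` stand-ins and function bodies are illustrative, not the actual ExecuTorch types): the quantized_decomposed-style op takes `out_dtype` as an explicit optional argument, while the cadence custom op omits it and fixes the output dtype internally.

```cpp
#include <cstdint>
#include <optional>

// Simplified stand-ins for ExecuTorch types (illustrative only).
enum class ScalarType { Char, Float };
struct Tensor {
  ScalarType dtype;
};

// quantized_decomposed::dequantize_per_tensor.out style:
// out_dtype is an explicit optional parameter.
ScalarType decomposed_style(
    const Tensor& /*input*/, double /*scale*/, int64_t /*zero_point*/,
    int64_t /*quant_min*/, int64_t /*quant_max*/, ScalarType /*dtype*/,
    std::optional<ScalarType> out_dtype, Tensor& out) {
  out.dtype = out_dtype.value_or(ScalarType::Float);
  return out.dtype;
}

// cadence::dequantize_per_tensor.out style after this fix:
// no out_dtype parameter; the output dtype is fixed inside the op.
ScalarType cadence_style(
    const Tensor& /*input*/, double /*scale*/, int64_t /*zero_point*/,
    int64_t /*quant_min*/, int64_t /*quant_max*/, ScalarType /*dtype*/,
    Tensor& out) {
  constexpr ScalarType out_dtype = ScalarType::Float;
  out.dtype = out_dtype;
  return out.dtype;
}
```

Both variants produce a Float output here; the point is that the cadence op's callers never pass `out_dtype` at all, which is why the backend-local signature has to drop the parameter.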

Reviewed By: hsharma35

Differential Revision: D68109702
zonglinpeng authored and facebook-github-bot committed Jan 16, 2025
1 parent 1b7b10e commit 26fb017
Showing 1 changed file with 6 additions and 5 deletions.
11 changes: 6 additions & 5 deletions backends/cadence/fusion_g3/operators/op_dequantize.cpp
@@ -67,8 +67,8 @@ void check_dequantize_per_tensor_args(

   ET_CHECK_MSG(
       input.scalar_type() == dtype,
-      "input.scalar_type() %" PRId8 " is not matching dtype argumenta:",
-      static_cast<int8_t>(input.scalar_type()));
+      "input.scalar_type() %s is not matching dtype arguments:",
+      ::executorch::runtime::toString(input.scalar_type()));

   if (out_dtype.has_value()) {
     ET_CHECK_MSG(
@@ -561,11 +561,12 @@ Tensor& dequantize_per_tensor_out(
     const Tensor& input,
     double scale,
     int64_t zero_point,
-    int64_t quant_min,
-    int64_t quant_max,
+    __ET_UNUSED int64_t quant_min,
+    __ET_UNUSED int64_t quant_max,
     ScalarType dtype,
-    ::executorch::aten::optional<ScalarType> out_dtype,
     Tensor& out) {
+  constexpr ScalarType out_dtype = ScalarType::Float;
+
 #ifdef OP_ARG_CHECK
   torch::executor::Error err = resize_tensor(out, input.sizes());
   ET_CHECK_MSG(
