🐛 [Bug] arange converter dynamic shape bug #2894

Open · peri044 opened this issue Jun 6, 2024 · 2 comments
Labels: bug (Something isn't working)

peri044 (Collaborator) commented Jun 6, 2024

Bug Description

  1. Should this converter be moved to aten_ops_converters.py?
  2. I tried to pass a static-shape ITensor as end to this converter's test case but got the error: aten::arange() Expected a value of type 'number' for argument 'end' but instead found type 'FakeTensor'. This is because the schema declares the arguments as Scalar.
  3. Regarding the implementation: 1) the computed shape does not always seem to be valid, and 2) the output dtype is decided by end.dtype, but start could be a float while end is an int. For example:

>>> torch.arange(1.2, 15, 2.2)
tensor([ 1.2000,  3.4000,  5.6000,  7.8000, 10.0000, 12.2000, 14.4000])
>>> torch.ops.aten.div.Tensor_mode(13.8, 2.2, rounding_mode="trunc")
tensor(6.)

I think the actual length should be 7, but the computed shape gives 6. Please correct me if I'm wrong (see the sketch below).
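
For reference, a minimal sketch of the length arithmetic (plain Python, not the converter's actual code; the variable names are illustrative). torch.arange documents its output length as ceil((end - start) / step), so truncating division undercounts whenever the division is inexact:

import math

start, end, step = 1.2, 15, 2.2

# Truncating division, as in aten.div.Tensor_mode(..., rounding_mode="trunc"):
# trunc(13.8 / 2.2) == trunc(6.2727...) == 6 -> one element short
trunc_length = int((end - start) / step)

# torch.arange's documented length: ceil((end - start) / step)
# ceil(6.2727...) == 7, matching the 7 elements printed above
ceil_length = math.ceil((end - start) / step)

print(trunc_length, ceil_length)  # prints: 6 7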

To Reproduce

Steps to reproduce the behavior:

Expected behavior

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0):
  • PyTorch Version (e.g. 1.0):
  • CPU Architecture:
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, libtorch, source):
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

@peri044 peri044 added the bug Something isn't working label Jun 6, 2024
peri044 (Collaborator, Author) commented Jun 6, 2024

@zewenli98

  1. Having it in op_evaluators is fine, given its duality (it behaves as both an evaluator and a converter).
  2. How are you passing a static-shape ITensor to the converter? Can you show your test case?
  3. Seems like a bug. I'll work on a fix; see the sketch below for one possible direction.
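
A minimal sketch of one possible direction (an assumption, not necessarily the actual fix): div.Tensor_mode has no "ceil" rounding mode, but ceil(x / y) == -floor(-x / y), so floor-mode division plus negation yields the correct element count for the example above.

import torch

start, end, step = 1.2, 15.0, 2.2

# Truncating division undercounts for the example in the bug description:
n_trunc = torch.ops.aten.div.Tensor_mode(end - start, step, rounding_mode="trunc")
print(n_trunc)  # tensor(6.)

# Ceil via negated floor division gives the correct element count:
n_ceil = -torch.ops.aten.div.Tensor_mode(-(end - start), step, rounding_mode="floor")
print(n_ceil)  # tensor(7.)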

@peri044 peri044 changed the title 🐛 [Bug] arange converter dynamic shape questions 🐛 [Bug] arange converter dynamic shape bug Jun 6, 2024
zewenli98 (Collaborator) commented

  1. I used a test case like:
# Runs inside the converter test harness (which imports torch and torch.nn as
# nn, and provides self.run_test):
def test_arange_dynamic(self):
    class Arange(nn.Module):
        def forward(self, end_tensor):
            # Pass a Tensor where the schema expects a Scalar `end`
            return torch.ops.aten.arange.start_step(0, end_tensor, 1)

    pyt_input = 7
    inputs = [
        torch.tensor(pyt_input, dtype=torch.int32),
    ]
    self.run_test(
        Arange(),
        inputs,
    )

and got the error:

...
    arange_start_step = torch.ops.aten.arange.start_step(0, end_tensor, 1);  end_tensor = None
  File "/home/zewenl/anaconda3/envs/trt-10-py310/lib/python3.10/site-packages/torch/_ops.py", line 610, in __call__
    return self_._op(*args, **kwargs)
RuntimeError: aten::arange() Expected a value of type 'number' for argument 'end' but instead found type 'Tensor'.
Position: 1
Value: tensor(7, device='cuda:0', dtype=torch.int32)
Declaration: aten::arange.start_step(Scalar start, Scalar end, Scalar step=1, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
Cast error details: Cannot cast tensor(7, device='cuda:0', dtype=torch.int32) to number

To execute this test, run the following from the base repo dir:
     python test_arange_aten.py -k test_arange_dynamic

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0

----------------------------------------------------------------------
Ran 1 test in 0.149s

The schema shows that arange and its variants only accept Scalar arguments, not Tensor.
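
For illustration, a minimal eager-mode sketch of that constraint (reusing the value from the test case above): the Tensor must be converted to a Python number, e.g. via .item(), before it can bind to the Scalar end argument.

import torch

end_tensor = torch.tensor(7, dtype=torch.int32)

# Passing end_tensor directly raises the cast error above, because the schema
# declares `end` as Scalar. Extracting a Python number first binds correctly:
out = torch.ops.aten.arange.start_step(0, end_tensor.item(), 1)
print(out)  # tensor([0, 1, 2, 3, 4, 5, 6])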
