
TracerWarning: torch.tensor results are registered as constants in the trace. #95

Open
zhj296409022 opened this issue Oct 22, 2024 · 1 comment

zhj296409022 commented Oct 22, 2024

When I export disk+lightglue, does this warning mean that the variable is fixed to the value it had at export time? If so, will there be any issues when running disk?

/usr/local/lib/python3.10/dist-packages/kornia/feature/disk/_unets/unet.py:39: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if inp.size(1) != self.in_features:
/usr/local/lib/python3.10/dist-packages/kornia/feature/disk/_unets/unet.py:45: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if (inp.size(2) % input_size_divisor != 0) or (inp.size(3) % input_size_divisor != 0):
/usr/local/lib/python3.10/dist-packages/kornia/feature/disk/disk.py:57: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if unet_output.shape[1] != self.desc_dim + 1:
/root/LightGlue-ONNX/lightglue_onnx/disk.py:47: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
kpts_len = torch.tensor(scores.shape[0]) # Still dynamic despite trace warning
/root/LightGlue-ONNX/lightglue_onnx/disk.py:47: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
kpts_len = torch.tensor(scores.shape[0]) # Still dynamic despite trace warning
/root/LightGlue-ONNX/lightglue_onnx/disk.py:48: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
max_keypoints = torch.minimum(torch.tensor(n), kpts_len)
/usr/local/lib/python3.10/dist-packages/torch/onnx/symbolic_helper.py:1515: UserWarning: ONNX export mode is set to TrainingMode.EVAL, but operator 'instance_norm' is set to train=True. Exporting with train=True.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/torch/onnx/symbolic_opset9.py:5858: UserWarning: Exporting aten::index operator of advanced indexing in opset 17 is achieved by combination of multiple ONNX operators, including Reshape, Transp
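For context, here is a minimal sketch (illustrative, not the repository code) of what these warnings mean under `torch.jit.trace`: values that pass through Python, such as `scores.shape[0]`, are frozen at their export-time value, so the trace can stop generalizing to inputs of other sizes.

```python
import torch

class TopK(torch.nn.Module):
    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        # scores.shape[0] is a plain Python int during tracing; wrapping it in
        # torch.tensor() records a Constant node -- hence the TracerWarning.
        kpts_len = torch.tensor(scores.shape[0])
        k = torch.minimum(torch.tensor(4), kpts_len)
        # int(k) converts the tensor back to Python, so 4 is baked into the trace.
        return torch.topk(scores, int(k)).values

traced = torch.jit.trace(TopK(), torch.rand(10))  # emits warnings like the ones above
traced(torch.rand(8))  # ok: the frozen k=4 still fits
traced(torch.rand(2))  # RuntimeError: k=4 was frozen from the 10-element example
```

Whether the ONNX export is actually affected depends on the graph the exporter records; the `# Still dynamic despite trace warning` comments in disk.py suggest the shape is captured symbolically there, but running the exported model on differently sized inputs is the reliable test.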

fabio-sim (Owner) commented
Hi @zhj296409022, thank you for your interest in LightGlue-ONNX.

I believe you're using the legacy export script. Could you try exporting with the newer dynamo.py script? It uses a different version of the DISK module (https://github.com/fabio-sim/LightGlue-ONNX/blob/main/lightglue_dynamo/models/disk/disk.py) that I refactored so that the trace is correct.
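As a quick sanity check that an export generalizes, you can run the ONNX model on input sizes different from the export example and confirm the output shapes follow the input. A sketch with onnxruntime; the model filename and input layout are assumptions, so adjust them to your export:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("disk.onnx")  # hypothetical filename
name = session.get_inputs()[0].name
# Sizes that differ from the export example (keep them divisible by the
# model's input size divisor, per the unet.py warning above).
for size in (256, 512):
    image = np.random.rand(1, 3, size, size).astype(np.float32)
    outputs = session.run(None, {name: image})
    print(size, [o.shape for o in outputs])
```

If the output shapes track the new input sizes, the baked-in constants from the trace warnings did not end up mattering for that export.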
