
variables needed for gradient computation has been modified #832

Open
Dannynis opened this issue Sep 23, 2024 · 4 comments
Comments

@Dannynis

Hey, I'm trying to use this great framework to do texture optimization, but I'm facing the following issue; all the steps to reproduce are in this Colab notebook.

https://colab.research.google.com/drive/1K6oT4DD7NroacY3jd6k-62QI7PzLGnXM?usp=sharing

key code:

import torch
import kaolin as kal  # assuming the usual Kaolin import alias; mesh and camera are set up earlier in the notebook

mesh.materials[0].diffuse_texture = torch.nn.Parameter(mesh.materials[0].diffuse_texture)
r = kal.render.easy_render.render_mesh(camera, mesh.cuda())
(r['render'].sum()).backward()

and the error:

File "/usr/local/lib/python3.10/dist-packages/kaolin/render/lighting/sg.py", line 340, in sg_warp_specular_term
h /= torch.sqrt(_dot(h, h))
(Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:111.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass

Thank you!
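The traceback points at an in-place division inside Kaolin's specular lighting code (`h /= torch.sqrt(_dot(h, h))`). Below is a minimal PyTorch sketch, independent of Kaolin, that reproduces the same class of autograd error and shows the usual out-of-place fix; the tensor names are illustrative and this is not necessarily the patch that was merged.

```python
import torch

x = torch.randn(4, requires_grad=True)

# exp() saves its output for the backward pass, so modifying that output
# in place invalidates the saved tensor: backward() then fails with
# "one of the variables needed for gradient computation has been
# modified by an inplace operation".
h = x.exp()
h /= h.sum()          # in-place division bumps h's version counter
try:
    h.sum().backward()
except RuntimeError as e:
    print("in-place version fails:", type(e).__name__)

# The usual fix: replace the in-place op with an out-of-place one, so
# the tensor saved by exp() is left untouched.
h2 = x.exp()
h2 = h2 / h2.sum()    # creates a new tensor instead of mutating h2
h2.sum().backward()
print("out-of-place version succeeds, grad shape:", tuple(x.grad.shape))
```

`torch.autograd.set_detect_anomaly(True)` (which the traceback above was generated with, per the `python_anomaly_mode.cpp` reference) makes the failing forward-pass line easier to locate in cases like this.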

@OfirShechter

Having pretty much the same issue... Any solution so far?

@Caenorst
Collaborator

Hi @Dannynis , @OfirShechter ,

I'm going to push a fix soon; it will be in the next release.

@Caenorst
Collaborator

I just merged the fix; it will appear in our next release (should be this week! :) )

@Dannynis
Author

Works great, thank you!
