
Cuda runtime error: 304 #489

Open
Sc1anso opened this issue Jul 10, 2024 · 0 comments
Sc1anso commented Jul 10, 2024

Hi, I'm trying to export a mesh generated with threestudio, but I get this error:

Seed set to 0
[INFO] Using 16bit Automatic Mixed Precision (AMP)
[INFO] GPU available: True (cuda), used: True
[INFO] TPU available: False, using: 0 TPU cores
[INFO] HPU available: False, using: 0 HPUs

[INFO] You are using a CUDA device ('NVIDIA GeForce RTX 4080') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
/mnt/c/Users/Scianso/Documents/GitHub/threestudio/threestudio/data/uncond.py:400: UserWarning: Using torch.cross without specifying the dim arg is deprecated.

Please either pass the dim explicitly or simply use torch.linalg.cross.

The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at ../aten/src/ATen/native/Cross.cpp:62.)
right: Float[Tensor, "B 3"] = F.normalize(torch.cross(lookat, up), dim=-1)

[INFO] Restoring states from the checkpoint path at outputs/dreamfusion-sd/a_pineapple/ckpts/last.ckpt
[INFO] LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
[INFO] Loaded model weights from the checkpoint at outputs/dreamfusion-sd/a_pineapple/ckpts/last.ckpt
/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:424: The 'predict_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=27` in the `DataLoader` to improve performance.

Predicting: | | 0/? [00:00<?, ? it/s]
/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/torch/utils/cpp_extension.py:1967: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
WARNING: dzn is not a conformant Vulkan implementation, testing use only.
WARNING: Some incorrect rendering might occur because the selected Vulkan device (Microsoft Direct3D12 (NVIDIA GeForce RTX 4080)) doesn't support base Zink requirements: feats.features.logicOp have_EXT_custom_border_color have_EXT_line_rasterization
Predicting DataLoader 0: 0%| | 0/120 [00:00<?, ?it/s]/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/loops/prediction_loop.py:255: predict returned None if it was on purpose, ignore this warning...
Predicting DataLoader 0: 100%|███████████████████████████████████████████████████████| 120/120 [00:00<00:00, 365.90it/s]
[INFO] Using xatlas to perform UV unwrapping, may take a while ...

[INFO] Exporting textures ...

Traceback (most recent call last):
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/launch.py", line 301, in <module>
main(args, extras)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/launch.py", line 259, in main
trainer.predict(system, datamodule=dm, ckpt_path=cfg.resume)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 863, in predict
return call._call_and_handle_interrupt(
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 902, in _predict_impl
results = self._run(model, ckpt_path=ckpt_path)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _run
results = self._run_stage()
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1025, in _run_stage
return self.predict_loop.run()
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
return loop_run(self, *args, **kwargs)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/loops/prediction_loop.py", line 130, in run
return self.on_run_end()
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/loops/prediction_loop.py", line 202, in on_run_end
results = self._on_predict_epoch_end()
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/loops/prediction_loop.py", line 368, in _on_predict_epoch_end
call._call_lightning_module_hook(trainer, "on_predict_epoch_end")
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 159, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/threestudio/systems/base.py", line 332, in on_predict_epoch_end
exporter_output: List[ExporterOutput] = self.exporter()
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/threestudio/models/exporters/mesh_exporter.py", line 47, in __call__
return self.export_obj_with_mtl(mesh)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/threestudio/models/exporters/mesh_exporter.py", line 87, in export_obj_with_mtl
rast, _ = self.ctx.rasterize_one(
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/threestudio/utils/rasterize.py", line 46, in rasterize_one
rast, rast_db = self.rasterize(pos[None, ...], tri, resolution)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/threestudio/utils/rasterize.py", line 37, in rasterize
return dr.rasterize(self.ctx, pos.float(), tri.int(), resolution, grad_db=True)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/nvdiffrast/torch/ops.py", line 310, in rasterize
return _rasterize_func.apply(glctx, pos, tri, resolution, ranges, grad_db, -1)
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/torch/autograd/function.py", line 598, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/mnt/c/Users/Scianso/Documents/GitHub/threestudio/tstudio/lib/python3.10/site-packages/nvdiffrast/torch/ops.py", line 246, in forward
out, out_db = _get_plugin(gl=True).rasterize_fwd_gl(raster_ctx.cpp_wrapper, pos, tri, resolution, ranges, peeling_idx)

RuntimeError: Cuda error: 304[cudaGraphicsGLRegisterBuffer(&s.cudaPosBuffer, s.glPosBuffer, cudaGraphicsRegisterFlagsWriteDiscard);]

Does anyone know how to fix this?
This is the command I used: python launch.py --config outputs/dreamfusion-sd/a_pineapple/configs/parsed.yaml --export --gpu 0 resume=outputs/dreamfusion-sd/a_pineapple/ckpts/last.ckpt system.exporter_type=mesh-exporter

Running on WSL2, Ubuntu 22.04 LTS.
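A hedged note, not from the issue itself: CUDA error 304 (cudaErrorOperatingSystem) raised inside `cudaGraphicsGLRegisterBuffer` usually means CUDA/OpenGL interop is unavailable. That is common under WSL2, where OpenGL is routed through the dzn/Vulkan translation layer (see the "dzn is not a conformant Vulkan implementation" warnings above). nvdiffrast also ships a pure-CUDA rasterizer (`RasterizeCudaContext`) that needs no GL interop; the sketch below shows one way to prefer it and fall back to the GL context. The helper name `pick_rasterize_context` is hypothetical, not part of threestudio or nvdiffrast.

```python
def pick_rasterize_context(device="cuda"):
    """Prefer nvdiffrast's pure-CUDA rasterizer; fall back to the OpenGL one.

    Hypothetical helper: returns None when nvdiffrast is not installed or
    no usable context can be created (e.g. no GPU in this environment).
    """
    try:
        import nvdiffrast.torch as dr  # needs nvdiffrast and a CUDA-capable GPU
    except ImportError:
        return None
    for ctx_cls in (dr.RasterizeCudaContext, dr.RasterizeGLContext):
        try:
            # RasterizeCudaContext avoids cudaGraphicsGLRegisterBuffer entirely,
            # so it sidesteps the CUDA/GL interop that fails under WSL2 here.
            return ctx_cls(device=device)
        except Exception:
            continue  # e.g. no GPU, or broken GL interop (as in this issue)
    return None
```

If your threestudio version exposes the rasterizer context as a config option (check your version's README/exporter config), appending something like `system.exporter.context_type=cuda` to the export command may achieve the same thing without code changes.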
