
Training DreamFusion with Frequency Encoding #503

Open
KhoiDOO opened this issue Sep 19, 2024 · 2 comments
KhoiDOO commented Sep 19, 2024

I have been struggling with training DreamFusion using frequency encoding and importance sampling.

Here is the config file that I use:

name: "dreamfusion-sd"
tag: "${rmspace:${system.prompt_processor.prompt},_}"
exp_root_dir: "outputs"
seed: 0

data_type: "random-camera-datamodule"
data:
  batch_size: 1
  width: 64
  height: 64
  camera_distance_range: [1.5, 2.0]
  fovy_range: [40, 70]
  elevation_range: [-10, 45]
  light_sample_strategy: "dreamfusion"
  eval_camera_distance: 2.0
  eval_fovy_deg: 70.

system_type: "dreamfusion-system"
system:
  geometry_type: "implicit-volume"
  geometry:
    radius: 2.0
    normal_type: "analytic"

    # use Magic3D density initialization instead
    density_bias: "blob_magic3d"
    density_activation: softplus
    density_blob_scale: 10.
    density_blob_std: 0.5

    pos_encoding_config:
      otype: Frequency
      n_frequencies: 12

    mlp_network_config:
      otype: VanillaMLP
      activation: ReLU
      output_activation: null
      n_neurons: 64
      n_hidden_layers: 1

  material_type: "diffuse-with-point-light-material"
  material:
    ambient_only_steps: 2001
    albedo_activation: sigmoid

  background_type: "neural-environment-map-background"
  background:
    color_activation: sigmoid

  renderer_type: "nerf-volume-renderer"
  renderer:
    radius: ${system.geometry.radius}
    num_samples_per_ray: 512

    estimator: importance
    num_samples_per_ray_importance: 64

  prompt_processor_type: "stable-diffusion-prompt-processor"
  prompt_processor:
    pretrained_model_name_or_path: "stabilityai/stable-diffusion-2-1-base"
    prompt: ???

  guidance_type: "stable-diffusion-guidance"
  guidance:
    pretrained_model_name_or_path: "stabilityai/stable-diffusion-2-1-base"
    guidance_scale: 100.
    weighting_strategy: sds
    min_step_percent: 0.02
    max_step_percent: 0.98

  loggers:
    wandb:
      enable: false
      project: "threestudio"
      name: ${time:}

  loss:
    lambda_sds: 1.
    lambda_orient: [0, 10., 1000., 5000]
    lambda_sparsity: 1.
    lambda_opaque: 0.

  optimizer:
    name: Adam
    args:
      lr: 0.01
      betas: [0.9, 0.99]
      eps: 1.e-15
    params:
      geometry:
        lr: 0.01
      background:
        lr: 0.001

trainer:
  max_steps: 10000
  log_every_n_steps: 1
  num_sanity_val_steps: 0
  val_check_interval: 200
  enable_progress_bar: true
  precision: 16-mixed

checkpoint:
  save_last: true # save at each validation time
  save_top_k: -1
  every_n_train_steps: ${trainer.max_steps}

where I changed pos_encoding_config to the Frequency encoding settings from the tiny-cuda-nn documentation.
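For reference, tiny-cuda-nn's Frequency encoding is the NeRF-style positional encoding: each input coordinate is expanded into sin/cos pairs at n_frequencies octaves. A minimal pure-Python sketch of the idea (the exact scaling and interleaving convention inside tiny-cuda-nn may differ; this is illustrative only):

```python
import math

def frequency_encode(x, n_frequencies=12):
    # NeRF-style encoding of one scalar coordinate:
    # [sin(2^0*pi*x), cos(2^0*pi*x), ..., sin(2^(L-1)*pi*x), cos(2^(L-1)*pi*x)]
    out = []
    for i in range(n_frequencies):
        freq = (2.0 ** i) * math.pi
        out.append(math.sin(freq * x))
        out.append(math.cos(freq * x))
    return out

features = frequency_encode(0.5, n_frequencies=12)
print(len(features))  # 24 features per input dimension
```

With n_frequencies: 12 and 3D input positions, the encoder therefore emits 3 * 24 = 72 features, which feed into the VanillaMLP configured above.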

However, I got the following not-implemented error from tiny-cuda-nn:

  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 981, in _run
    results = self._run_stage()
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1025, in _run_stage
    self.fit_loop.run()
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
    self.advance()
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 140, in run
    self.advance(data_fetcher)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 250, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 190, in run
    self._optimizer_step(batch_idx, closure)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 268, in _optimizer_step
    call._call_lightning_module_hook(
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 167, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1306, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 153, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 238, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/amp.py", line 78, in optimizer_step
    closure_result = closure()
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 144, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 138, in closure
    self._backward_fn(step_output.closure_loss)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 239, in backward_fn
    call._call_strategy_hook(self.trainer, "backward", loss, optimizer)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 319, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 212, in backward
    self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision.py", line 72, in backward
    model.backward(tensor, *args, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1101, in backward
    loss.backward(*args, **kwargs)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/torch/_tensor.py", line 521, in backward
    torch.autograd.backward(
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/torch/autograd/__init__.py", line 289, in backward
    _engine_run_backward(
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/torch/autograd/graph.py", line 768, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/torch/autograd/function.py", line 306, in apply
    return user_fn(self, *args)
  File "/media/mountHDD1/users/luffy/3ddiverse/.env/lib/python3.10/site-packages/tinycudann/modules.py", line 149, in backward
    doutput_grad, params_grad, input_grad = ctx.ctx_fwd.native_tcnn_module.bwd_bwd_input(
RuntimeError: DifferentiableObject::backward_backward_input_impl: not implemented error

Since I used the importance estimator, there is no proposal network attached to the encoding function, so I'm not sure this is the problem; I also tried a proposal network with the same frequency encoding setting and hit the same error.
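For what it's worth, one reading of the traceback (a guess, not confirmed in this thread): the failure is in bwd_bwd_input, i.e. a second-order, backward-of-backward gradient through the tiny-cuda-nn encoding. With normal_type: "analytic", normals are obtained by differentiating the density with respect to the input positions, so the loss backward then needs second derivatives of the encoding, which the RuntimeError says is not implemented. A possible workaround (untested here) would be finite-difference normals, which only need first-order gradients:

```yaml
system:
  geometry:
    # hypothetical change: avoids backward_backward through the encoding
    normal_type: "finite_difference"
```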

I would appreciate any help. Thank you!

KhoiDOO commented Sep 19, 2024

What is that file?
