
How to run my own video? #9

Open
zhanghongyong123456 opened this issue Jan 3, 2024 · 5 comments

Comments

@zhanghongyong123456

How can I use it on my own data?

@dendenxu
Member

dendenxu commented Jan 3, 2024

Hi @zhanghongyong123456. There are several documents to look into:

  1. We list some examples of running baseline methods on a multi-view dataset in the readme here.
  2. The documentation on static scenes shows how to run EasyVolcap on a video file. This doc covers the full process of extracting camera poses with COLMAP, converting the camera parameters, and then training models (a minimal sketch of the conversion step follows this list).
  3. This documentation shows the full process of preparing a custom multi-view dataset from multiple videos, including mask extraction, bounding box & visual hull preparation, and background preparation.
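
For reference, here is a minimal sketch of what the camera-conversion step in item 2 involves: parsing COLMAP's text-format output (`cameras.txt`, `images.txt`) into per-image intrinsics `K` and world-to-camera extrinsics `R`, `T`. This is an illustration of the data involved, not EasyVolcap's actual converter; the on-disk layout the bundled scripts expect may differ, so check the docs.

```python
# A minimal sketch of the COLMAP -> camera-matrix conversion, assuming
# COLMAP's text-format output and a (SIMPLE_)PINHOLE camera model.
# EasyVolcap's bundled conversion scripts and on-disk layout may differ.
import numpy as np

def qvec2rotmat(qw, qx, qy, qz):
    # Standard quaternion-to-rotation-matrix conversion (COLMAP stores qw first).
    return np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qw * qz), 2 * (qx * qz + qw * qy)],
        [2 * (qx * qy + qw * qz), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qw * qx)],
        [2 * (qx * qz - qw * qy), 2 * (qy * qz + qw * qx), 1 - 2 * (qx * qx + qy * qy)],
    ])

def load_colmap_text(cameras_txt, images_txt):
    # cameras.txt lines: CAMERA_ID MODEL WIDTH HEIGHT PARAMS[...]
    Ks = {}
    with open(cameras_txt) as f:
        for line in f:
            if line.startswith('#'):
                continue
            elems = line.split()
            cam_id, model = int(elems[0]), elems[1]
            params = [float(p) for p in elems[4:]]
            if model == 'SIMPLE_PINHOLE':      # f, cx, cy
                fx = fy = params[0]; cx, cy = params[1], params[2]
            elif model == 'PINHOLE':           # fx, fy, cx, cy
                fx, fy, cx, cy = params[:4]
            else:
                raise ValueError(f'unhandled camera model: {model}')
            Ks[cam_id] = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])

    # images.txt alternates two lines per image:
    #   IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
    #   <2D point observations, which we skip>
    cams = {}
    with open(images_txt) as f:
        lines = [l for l in f if not l.startswith('#')]
    for line in lines[::2]:
        elems = line.split()
        R = qvec2rotmat(*[float(q) for q in elems[1:5]])
        T = np.array([float(t) for t in elems[5:8]])
        cams[elems[9]] = dict(K=Ks[int(elems[8])], R=R, T=T)  # world-to-camera
    return cams
```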

@aleatorydialogue

Is it possible to render from a custom dataset generated from a monocular video? I understand quality may suffer depending on the dataset. I'm very excited about the potential here; thank you for all your work.

@dendenxu
Member

Hi @aleatorydialogue, are you referring to a static or a dynamic monocular dataset?

For a static dataset, we have a complete guide covering data preparation, model training, and inference here.

A monocular dynamic dataset indeed sounds like an interesting direction, so we're currently working on open-sourcing official support for deformable algorithms (like DyNeRF and NeRFies), as you can see from the deformation field entries in the default volumetric_video_network.py file. The full data-preparation and model-training pipeline will be open-sourced along with the deformable model. We've already finalized the implementation and are currently working out the kinks, so expect an update soon!
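
As a concrete illustration of what a deformation field does, here is a minimal NeRFies-style sketch in PyTorch. This is not EasyVolcap's actual module; all names and sizes are illustrative assumptions. The idea: a small MLP conditioned on a per-frame latent code predicts an offset that warps each sample point into a shared canonical space, where a static radiance field is then queried.

```python
# Minimal NeRFies-style deformation field sketch in PyTorch.
# Illustrates the general technique only; module names, sizes, and the
# zero-init identity warp are assumptions, not EasyVolcap's implementation.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    def __init__(self, num_frames, t_dim=8, hidden=128, depth=4):
        super().__init__()
        self.t_embed = nn.Embedding(num_frames, t_dim)  # per-frame latent code
        layers, in_dim = [], 3 + t_dim
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU(inplace=True)]
            in_dim = hidden
        self.mlp = nn.Sequential(*layers)
        self.head = nn.Linear(hidden, 3)   # predicts a 3D offset delta_x
        nn.init.zeros_(self.head.weight)   # start as the identity warp
        nn.init.zeros_(self.head.bias)

    def forward(self, x, frame_idx):
        # x: (N, 3) sample points; frame_idx: (N,) long tensor of frame indices
        t = self.t_embed(frame_idx)        # (N, t_dim)
        delta = self.head(self.mlp(torch.cat([x, t], dim=-1)))
        return x + delta                   # points in canonical space

# Usage sketch: warp observed-frame samples, then query a static canonical NeRF.
# x_canonical = deform(x_observed, frame_idx)
# rgb, sigma = canonical_nerf(x_canonical, view_dirs)
```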

@aleatorydialogue

Awesome, I appreciate the reply. Yes, I'm hoping to be able to work with monocular dynamic datasets. For now this will obviously limit quality, and probably limit the novel views that can be inferred, but I imagine over time we will be able to infer with less and less input. I look forward to the update; in the meantime I will work through your examples to get up to speed. Very promising project, thank you for open-sourcing it.

@aleatorydialogue

Just wanted to check back in here. I'm still very focused on dynamic monocular video and still haven't had much success, especially with regard to quality.
