From c22773b282774f9fd06c5ae9fb41b4e848c3fcbe Mon Sep 17 00:00:00 2001
From: Qianyue He <46109954+Enigmatisms@users.noreply.github.com>
Date: Fri, 13 Sep 2024 17:30:30 +0800
Subject: [PATCH 1/4] Initial commit

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..72cc658
--- /dev/null
+++ b/README.md
@@ -0,0 +1,2 @@
+# DARTS
+[ACM TOG & SIGGRAPH Asia 2024 Journal Track] Official code release for the paper: "DARTS: Diffusion Approximated Residual Time Sampling for Time-of-flight Rendering in Homogeneous Scattering Media"

From 13593827ab05925a8cb2cf3922988fdcfe0c29ed Mon Sep 17 00:00:00 2001
From: Qianyue He <46109954+Enigmatisms@users.noreply.github.com>
Date: Fri, 13 Sep 2024 17:52:43 +0800
Subject: [PATCH 2/4] Update README.md

---
 README.md | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 72cc658..872d7e8 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,19 @@
 # DARTS
-[ACM TOG & SIGGRAPH Asia 2024 Journal Track] Official code release for the paper: "DARTS: Diffusion Approximated Residual Time Sampling for Time-of-flight Rendering in Homogeneous Scattering Media"
+
+> ACM TOG (SIGGRAPH Asia 2024 Journal Track)
+>
+> DARTS: Diffusion Approximated Residual Time Sampling for Time-of-flight Rendering in Homogeneous Scattering Media
+>
+> Qianyue He, Dongyu Du, Haitian Jiang, Xin Jin*
+
+Coming soon. Jesus, I never thought I would say this one day, but yeah, for real, coming soon (patent-related problems). The code will eventually be open-sourced as the supplementary material (which is included in the submission) in any case, so even if I should forget to update this repo, which is highly unlikely, anyone in need can still find the code.
+
+Well, this is the repository for our paper: DARTS: Diffusion Approximated Residual Time Sampling for Time-of-flight Rendering in Homogeneous Scattering Media.
+
+Yet, the code release will be postponed; it will most likely be out before the paper is officially published. The code is composed of two renderers:
+- A modified version of [pbrt-v3](https://github.com/mmp/pbrt-v3)
+- A modified version of [Tungsten](https://github.com/tunabrain/tungsten) (actually, [transient-Tungsten](https://github.com/GhostatSpirit))
+
+The code base will be huge (including several modified external packages used by pbrt-v3), so it takes time to release code that is guaranteed to compile and run. Since the arXiv version of this paper is already available (I used the IEEE template to avoid it being recognized as a SIGGRAPH submission), and judging by the reviews the method should be easy to reproduce, feel free to open an issue and discuss with me if you have any question about the implementation before the code upload.
+
+Apart from the other authors, I'd like to extend my personal gratitude to [Yang Liu](https://github.com/GhostatSpirit), the first author of the paper ["Temporally sliced photon primitives for time-of-flight rendering"](https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14584). The photon-point part of our implementation is based on his code base (listed above), and the discussions with him really pushed the modification of the Tungsten renderer forward. His work on camera-unwarped transient rendering is very solid and inspiring, and it definitely deserves more attention.
From 343a5b240fca0dcced78309e1a05ec08f575c27 Mon Sep 17 00:00:00 2001
From: Qianyue He <46109954+Enigmatisms@users.noreply.github.com>
Date: Fri, 13 Sep 2024 18:00:05 +0800
Subject: [PATCH 3/4] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 872d7e8..63fe801 100644
--- a/README.md
+++ b/README.md
@@ -14,6 +14,8 @@ Yet, the code release will be postponed; it will most likely be out before the p
 - A modified version of [pbrt-v3](https://github.com/mmp/pbrt-v3)
 - A modified version of [Tungsten](https://github.com/tunabrain/tungsten) (actually, [transient-Tungsten](https://github.com/GhostatSpirit))
 
+A kind reminder: don't be misled. **Diffusion** here does not refer to DDPM-style generative AI. It is an optics concept that describes the "diffusion" of photons within the participating medium. The only place PyTorch is used in this work is the precomputation of the EDA direction sampling table, which takes merely 5 seconds with `torch.compile`, and no back-propagation is involved. Yeah, I know, this is old-fashioned, sorry about that if you accidentally clicked this repo and want to see how diffusion models are employed.
+
 The code base will be huge (including several modified external packages used by pbrt-v3), so it takes time to release code that is guaranteed to compile and run. Since the arXiv version of this paper is already available (I used the IEEE template to avoid it being recognized as a SIGGRAPH submission), and judging by the reviews the method should be easy to reproduce, feel free to open an issue and discuss with me if you have any question about the implementation before the code upload.
 
 Apart from the other authors, I'd like to extend my personal gratitude to [Yang Liu](https://github.com/GhostatSpirit), the first author of the paper ["Temporally sliced photon primitives for time-of-flight rendering"](https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14584). The photon-point part of our implementation is based on his code base (listed above), and the discussions with him really pushed the modification of the Tungsten renderer forward. His work on camera-unwarped transient rendering is very solid and inspiring, and it definitely deserves more attention.

From 3136e81f5e5268eb90c8c5ddfc49119317c39907 Mon Sep 17 00:00:00 2001
From: Qianyue He <46109954+Enigmatisms@users.noreply.github.com>
Date: Fri, 13 Sep 2024 18:00:56 +0800
Subject: [PATCH 4/4] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 63fe801..50072b0 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,7 @@ Yet, the code release will be postponed; it will most likely be out before the p
 - A modified version of [pbrt-v3](https://github.com/mmp/pbrt-v3)
 - A modified version of [Tungsten](https://github.com/tunabrain/tungsten) (actually, [transient-Tungsten](https://github.com/GhostatSpirit))
 
-A kind reminder: don't be misled. **Diffusion** here does not refer to DDPM-style generative AI. It is an optics concept that describes the "diffusion" of photons within the participating medium. The only place PyTorch is used in this work is the precomputation of the EDA direction sampling table, which takes merely 5 seconds with `torch.compile`, and no back-propagation is involved. Yeah, I know, this is old-fashioned, sorry about that if you accidentally clicked this repo and want to see how diffusion models are employed.
+A kind reminder: don't be misled. **Diffusion** here does not refer to DDPM-style generative AI. It is an optics concept that describes the "diffusion" of photons within the participating medium. The only place PyTorch is used in this work is the precomputation of the EDA direction sampling table, which takes merely 5 seconds with `torch.compile`, and no back-propagation is involved. Yeah, I know, this is old-fashioned, sorry if you accidentally clicked this repo hoping to see how diffusion models are employed.
 
 The code base will be huge (including several modified external packages used by pbrt-v3), so it takes time to release code that is guaranteed to compile and run. Since the arXiv version of this paper is already available (I used the IEEE template to avoid it being recognized as a SIGGRAPH submission), and judging by the reviews the method should be easy to reproduce, feel free to open an issue and discuss with me if you have any question about the implementation before the code upload.
 
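For anyone curious what such a `torch.compile`-accelerated precomputation can look like, below is a minimal Python sketch of tabulating an angular density and building a normalized CDF for inverse-transform direction sampling. This is not the paper's actual EDA table: `toy_density`, `build_direction_table`, and the Henyey-Greenstein-style placeholder lobe are illustrative assumptions only.

```python
import torch

# Placeholder angular density (Henyey-Greenstein-style lobe). The real EDA
# density from the DARTS paper is NOT reproduced here; this is only a stand-in
# to show the tabulation pattern.
def toy_density(cos_theta: torch.Tensor, g: float = 0.5) -> torch.Tensor:
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5

@torch.no_grad()  # pure tabulation: no back-propagation is involved
def build_direction_table(n_bins: int = 4096, g: float = 0.5) -> torch.Tensor:
    cos_theta = torch.linspace(-1.0, 1.0, n_bins)
    pdf = toy_density(cos_theta, g)
    cdf = torch.cumsum(pdf, dim=0)
    return cdf / cdf[-1]  # normalized CDF, inverted at sampling time

# torch.compile only speeds up the one-off precomputation; the table is static.
build_direction_table = torch.compile(build_direction_table)
table = build_direction_table()

# Draw directions by inverting the tabulated CDF with uniform random numbers.
u = torch.rand(8)
idx = torch.searchsorted(table, u).clamp(max=table.numel() - 1)
cos_samples = torch.linspace(-1.0, 1.0, table.numel())[idx]
```

Since this is a pure tabulation, wrapping it in `torch.no_grad()` and compiling the whole function is enough; no gradients or training loop are needed, which is why the precomputation finishes in seconds.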