From fbdc58cab9f4ee2be7a5e1f2e2787ecd9311942f Mon Sep 17 00:00:00 2001
From: Vikram Voleti
Date: Tue, 19 Mar 2024 02:37:11 +0530
Subject: [PATCH] Fixes typos (#308)

Co-authored-by: Vikram Voleti
---
 README.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index d2f44cc9..96a28364 100644
--- a/README.md
+++ b/README.md
@@ -5,20 +5,20 @@
 ## News
 
 **March 18, 2024**
-- We are releasing [SV3D](https://huggingface.co/stabilityai/sv3d), an image-to-video model for novel multi-view synthesis, for research purposes:
-  - SV3D was trained to generate 21 frames at resolution 576x576, given 1 context frame of the same size, ideally a white-background image with one object.
-  - SV3D_u: This variant generates orbital videos based on single image inputs without camera conditioning..
-  - SV3D_p: Extending the capability of SVD3_u, this variant accommodates both single images and orbital views allowing for the creation of 3D video along specified camera paths.
+- We are releasing **[SV3D](https://huggingface.co/stabilityai/sv3d)**, an image-to-video model for novel multi-view synthesis, for research purposes:
+  - **SV3D** was trained to generate 21 frames at resolution 576x576, given 1 context frame of the same size, ideally a white-background image with one object.
+  - **SV3D_u**: This variant generates orbital videos based on single image inputs without camera conditioning.
+  - **SV3D_p**: Extending the capability of **SV3D_u**, this variant accommodates both single images and orbital views, allowing for the creation of 3D video along specified camera paths.
 - We extend the streamlit demo `scripts/demo/video_sampling.py` and the standalone python script `scripts/sampling/simple_video_sample.py` for inference of both models.
 - Please check our [project page](https://sv3d.github.io), [tech report](https://sv3d.github.io/static/paper.pdf) and [video summary](https://youtu.be/Zqw4-1LcfWg) for more details.
 
-To run SV3D_u on a single image:
+To run **SV3D_u** on a single image:
 - Download `sv3d_u.safetensors` from https://huggingface.co/stabilityai/sv3d to `checkpoints/sv3d_u.safetensors`
 - Run `python scripts/sampling/simple_video_sample.py --input_path <path/to/image.png> --version sv3d_u`
 
-To run SV3D_u on a single image:
+To run **SV3D_p** on a single image:
 - Download `sv3d_p.safetensors` from https://huggingface.co/stabilityai/sv3d to `checkpoints/sv3d_p.safetensors`
-1. Generate static orbit at a specified elevation eg. 10 : `python scripts/sampling/simple_video_sample.py --input_path <path/to/image.png> --version sv3d_p --elevations_deg 10.0`
+1. Generate static orbit at a specified elevation, e.g. 10.0: `python scripts/sampling/simple_video_sample.py --input_path <path/to/image.png> --version sv3d_p --elevations_deg 10.0`
 2. Generate dynamic orbit at a specified elevations and azimuths: specify sequences of 21 elevations (in degrees) to `elevations_deg` ([-90, 90]), and 21 azimuths (in degrees) to `azimuths_deg` [0, 360] in sorted order from 0 to 360. For example: `python scripts/sampling/simple_video_sample.py --input_path <path/to/image.png> --version sv3d_p --elevations_deg [] --azimuths_deg []`
 
 To run SVD or SV3D on a streamlit server:
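
Editor's note (not part of the patch above): the dynamic-orbit step for SV3D_p expects 21 elevation values and 21 azimuth values, with azimuths sorted ascending within [0, 360]. Below is a minimal Python sketch of one way to assemble those lists and invoke `scripts/sampling/simple_video_sample.py`. The bracketed list syntax mirrors the README's placeholders, and the input path, the chosen elevation, and the subprocess invocation are illustrative assumptions rather than the repository's documented interface.

# Illustrative sketch: build 21-frame elevation/azimuth sequences for an SV3D_p
# dynamic orbit and call the sampling script from the repository root. The
# list-argument format follows the bracketed placeholders in the README; the
# input path is a hypothetical stand-in for your own image.
import subprocess

NUM_FRAMES = 21                                      # SV3D generates 21 frames
elevations = [10.0] * NUM_FRAMES                     # constant elevation, within [-90, 90]
azimuths = [i * 360.0 / NUM_FRAMES for i in range(NUM_FRAMES)]  # 0.0, ~17.1, ... ascending in [0, 360)

subprocess.run(
    [
        "python", "scripts/sampling/simple_video_sample.py",
        "--input_path", "path/to/image.png",         # ideally a white-background image with one object
        "--version", "sv3d_p",
        "--elevations_deg", str(elevations),          # e.g. "[10.0, 10.0, ...]"
        "--azimuths_deg", str(azimuths),              # e.g. "[0.0, 17.142..., ...]"
    ],
    check=True,
)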