Guided Inpainting

(Teaser figure)

Many video editing tasks such as rotoscoping or object removal require the propagation of context across frames. While transformers and other attention-based approaches that aggregate features globally have demonstrated great success at propagating object masks from keyframes to the whole video, they struggle to propagate high-frequency details such as textures faithfully. We hypothesize that this is due to an inherent bias of global attention towards low-frequency features. To overcome this limitation, we present a two-stream approach, where high-frequency features interact locally and low-frequency features interact globally. The global interaction stream remains robust in difficult situations such as large camera motions, where explicit alignment fails. The local interaction stream propagates high-frequency details through deformable feature aggregation and, informed by the global interaction stream, learns to detect and correct errors of the deformation field. We evaluate our two-stream approach on inpainting tasks, where experiments show that it improves both the propagation of features within a single frame, as required for image inpainting, and their propagation from keyframes to target frames. Applied to video inpainting, our approach leads to 44% and 26% improvements in FID and LPIPS scores.

Towards Unified Keyframe Propagation Models
Patrick Esser, Peter Michael, Soumyadip Sengupta

comparison_small.mp4

Video results for all DAVIS sequences.
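
The two-stream design described above can be sketched in a few lines of PyTorch. The sketch below is only an illustration of the general pattern, not the code of this repository; the module names, the frequency split via average pooling, and the layer sizes are assumptions made for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class TwoStreamBlock(nn.Module):
    """Illustrative only: global attention over low-frequency features,
    local deformable aggregation of high-frequency features."""
    def __init__(self, channels=64, heads=4, kernel_size=3):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Offsets for the local stream are predicted from the global stream,
        # so global context can detect and correct errors of the deformation field.
        self.offset_pred = nn.Conv2d(channels, 2 * kernel_size * kernel_size, 3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        # Split features into a low-frequency component and a high-frequency residual.
        low = F.avg_pool2d(x, 2)
        high = x - F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)
        # Global stream: self-attention over all low-frequency positions.
        tokens = self.norm(low.flatten(2).transpose(1, 2))
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h // 2, w // 2)
        glob = F.interpolate(glob, size=(h, w), mode="bilinear", align_corners=False)
        # Local stream: deformable aggregation of high-frequency detail,
        # conditioned on the globally aggregated features.
        local = self.deform(high, self.offset_pred(glob))
        return x + glob + local

block = TwoStreamBlock(channels=64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])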

Requirements

conda env create -f env.yaml
conda activate guided-inpainting

Download the raft-things.pth checkpoint for RAFT and place it into checkpoints/flow/raft/raft-things.pth.

Download the encoder_epoch_20.pth checkpoint for the ade20k-resnet50dilated-ppm_deepsup perceptual loss of LaMa and place it into checkpoints/lama/ade20k/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth.
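
A quick way to confirm that both files ended up where the configs expect them is a short check like the following (not part of the repository):

from pathlib import Path

expected = [
    "checkpoints/flow/raft/raft-things.pth",
    "checkpoints/lama/ade20k/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth",
]
for path in expected:
    status = "ok" if Path(path).is_file() else "MISSING"
    print(f"{status:>7}  {path}")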

Evaluation

To reproduce the results from Table 2, download the validation data to data/places365/lama/val_guided/. Download the desired pre-trained checkpoint(s) and run

python gi/main.py --base configs/<model>.yaml --gpus 0, --train false --resume_from_checkpoint models/<model>.ckpt

To reproduce the results in Table 3, set up the DEVIL benchmark, download the pre-computed results, and run DEVIL on them. Note that at the moment we do not plan to release the propagation code.

Training

Follow the Places training-split preparation from LaMa and place the data into data/places365/data_large. Start training with

python gi/main.py --base configs/<model>.yaml --gpus 0,1

Shoutouts
