
Scyclone-PyTorch

[Open In Colab] [Paper]

Reimplementation of the voice conversion system "Scyclone" in PyTorch.

Demo

ToDo: Link a super great, impressive, high-quality audio demo.

How to Use

Quick training

Jump to the notebook in Google Colaboratory, then Run. That's all!!

Install

pip install git+https://github.com/tarepan/Scyclone-PyTorch

Training

python -m scyclonepytorch.main_train

For the available arguments, please check ./scyclonepytorch/args.py.
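For example, to disable AMP and train in FP32 (the exact flag spelling is an assumption here, based on the no_amp flag mentioned below; args.py has the real argument names):

python -m scyclonepytorch.main_train --no_amp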

Current repo status

The spec2spec model and training are exactly the same as in the original paper (no WaveRNN vocoder; Griffin-Lim is used instead).
I trained with a small dataset (only 64/64 utterances), but it did not work well.
The original paper uses over 4000/4000 utterances, so a bigger dataset seems to be needed.
Training itself is very fast (within a few days), so I encourage you to try training with your own dataset!!
I would be glad if you shared the result. I'm waiting in the repository issues!!
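For reference, a minimal sketch of Griffin-Lim waveform reconstruction with torchaudio is shown below. The STFT parameters (n_fft=254, hop_length=128, i.e. 128 frequency bins) are assumptions for illustration and may differ from this repository's actual settings.

import torch
import torchaudio

n_fft, hop = 254, 128  # assumed values; check this repo's preprocessing code

# Magnitude spectrogram (power=1.0) and its Griffin-Lim inverse
to_spec = torchaudio.transforms.Spectrogram(n_fft=n_fft, hop_length=hop, power=1.0)
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=n_fft, hop_length=hop, power=1.0, n_iter=32)

waveform, sr = torchaudio.load("sample.wav")  # (channel, time)
spec = to_spec(waveform)                      # (channel, 128, frames)
# ... the trained generator would convert `spec` here ...
reconstructed = griffin_lim(spec)             # (channel, time), no neural vocoder needed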

Training Speed

3.37 [iter/sec] @ NVIDIA T4, Google Colaboratory (with AMP)

Original paper

Paper

@misc{2005.03334,
Author = {Masaya Tanaka and Takashi Nose and Aoi Kanagaki and Ryohei Shimizu and Akira Ito},
Title = {Scyclone: High-Quality and Parallel-Data-Free Voice Conversion Using Spectrogram and Cycle-Consistent Adversarial Networks},
Year = {2020},
Eprint = {arXiv:2005.03334},
}

Original Paper's Demo

Differences from the original research

  • Datum length is based on the paper, not the poster (G160/D128 in the paper vs. G240/D240 in the poster; details are in my summary blog, and see the sketch after this list)
  • Automatic Mixed Precision training is used (FP32 training is also supported through the no_amp flag)
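As an illustration of the datum lengths above, here is a minimal sketch of random frame cropping from a (freq, time) spectrogram. This is an assumption for illustration, not this repository's exact implementation.

import torch

def random_crop_frames(spec: torch.Tensor, length: int) -> torch.Tensor:
    """Randomly crop `length` consecutive frames from a (freq, time) spectrogram."""
    n_frames = spec.size(-1)
    start = int(torch.randint(0, n_frames - length + 1, (1,)))
    return spec[..., start:start + length]

spec = torch.randn(128, 1000)              # dummy spectrogram: 128 freq bins x 1000 frames
g_segment = random_crop_frames(spec, 160)  # generator datum length (paper)
d_segment = random_crop_frames(spec, 128)  # discriminator datum length (paper)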

Dependency Notes

PyTorch version

PyTorch v1.6 works (we checked with v1.6.0).

For dependency resolution, we do NOT explicitly specify compatible versions.
PyTorch has several distributions for various environments (e.g. different compatible CUDA versions).
Unfortunately, this makes version management complicated for dependency-management systems.
In our case, Poetry cannot handle the CUDA variant string (e.g. torch>=1.6.0 cannot accept 1.6.0+cu101).
To work around this, we use torch==*, which is equivalent to no version specification.
A setup.py could resolve this problem (e.g. torchaudio's setup.py), but we will not spend effort on that hacky method.
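For reference, the corresponding dependency entry in pyproject.toml might look like the following sketch (assumed layout, not the exact file contents):

[tool.poetry.dependencies]
torch = "*"   # any version, so CUDA-variant builds such as 1.6.0+cu101 are accepted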