This repository contains SoTA algorithms, models, and interesting projects in the area of content generation, including ChatGPT detection and Stable Diffusion, and will be continuously updated.
ONE is short for "ONE for all" and "Optimal generators with No Exception" (credits to GPT-4).
Hello MindSpore from Stable Diffusion 3!
- 2024.06.13 🚀🚀🚀 mindone/diffusers now supports Stable Diffusion 3. Give it a try yourself!

  ```python
  import mindspore
  from mindone.diffusers import StableDiffusion3Pipeline

  pipe = StableDiffusion3Pipeline.from_pretrained(
      "stabilityai/stable-diffusion-3-medium-diffusers",
      mindspore_dtype=mindspore.float16,
  )
  prompt = "A cat holding a sign that says 'Hello MindSpore'"
  image = pipe(prompt)[0][0]
  image.save("sd3.png")
  ```
- 2024.05.23
  - Two OpenSora models are supported!
    - hpcai-OpenSora based on VAE+STDiT
    - PKU-OpenSora based on CausalVAE3D+Latte_T2V
  - diffusers is now runnable with MindSpore (experimental)
- 2024.03.22
- 2024.03.04
  - New generative models released!
    - AnimateDiff v1, v2, and v3
    - Pangu Draw v3 for Chinese text-to-image generation
    - Stable Video Diffusion (SVD) for image-to-video generation
    - Tune-a-Video for one-shot video tuning
  - Enhanced Stable Diffusion and Stable Diffusion XL with more add-ons: ControlNet, T2I-Adapter, and IP-Adapter
- 2023.07.01 A Stable Diffusion 2.0 LoRA fine-tuning example can be found here
- ChatGPT Detection: detect whether the input texts are generated by ChatGPT
- Stable Diffusion 1.5/2.x: text-to-image generation via latent diffusion models (with support for inference and fine-tuning)
- Stable Diffusion XL: new state-of-the-art SD model with double text embedders and a larger UNet
- VideoComposer: generate videos from prompts or reference videos via controllable video diffusion (both training and inference are supported)
- AnimateDiff: SoTA text-to-video generation models (including v1, v2, and v3) supporting motion LoRA fine-tuning
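The diffusion models above follow the same pipeline pattern as the Stable Diffusion 3 snippet in the news section. As a minimal sketch of how Stable Diffusion XL inference might look — assuming `mindone.diffusers` exposes a `StableDiffusionXLPipeline` mirroring the Hugging Face diffusers API, and that the `stabilityai/stable-diffusion-xl-base-1.0` weights are available locally or downloadable (both are assumptions, not verified against this repository):

```python
import mindspore
# Assumption: mindone.diffusers ports StableDiffusionXLPipeline from HF diffusers.
from mindone.diffusers import StableDiffusionXLPipeline

# Load the SDXL base weights in half precision (model id is an assumption).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    mindspore_dtype=mindspore.float16,
)

prompt = "A watercolor painting of a lighthouse at sunrise"
# As in the SD3 example above, the pipeline returns a tuple whose first
# element is a list of generated PIL images.
image = pipe(prompt)[0][0]
image.save("sdxl.png")
```

Running this requires the SDXL checkpoint (several GB) and a MindSpore-compatible device, so treat it as a usage sketch rather than a drop-in script.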