
Releases: rupeshs/fastsdcpu

v1.0.0-beta.22

03 Dec 15:27
0ddbb9c
Pre-release
  • Added SD Turbo model (PyTorch/OpenVINO) support with 1-step inference
  • Added image-to-image support for the SD Turbo/SDXL Turbo models
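The 1-step Turbo workflow above can be sketched with the Hugging Face diffusers API (a minimal illustration, not FastSD CPU's internal code; the prompt and output filename are placeholders):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load SD Turbo; float32 keeps it CPU-friendly.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float32
)

# SD Turbo is distilled for single-step sampling, and classifier-free
# guidance is disabled (guidance_scale=0.0).
image = pipe(
    "a photo of a red fox in the snow",  # placeholder prompt
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("out.png")
```

The same call shape applies to SDXL Turbo by swapping in its model ID.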

v1.0.0-beta.21

29 Nov 16:05
Pre-release
  • Added SDXL Turbo OpenVINO support (about 2.5 seconds to generate an image)

v1.0.0-beta.20

29 Nov 01:41
Pre-release
  • Added support for 1-step ultra-fast text-to-image generation (SDXL Turbo)

v1.0.0-beta.19

27 Nov 02:47
Pre-release
  • Fixed tiny decoder issue in the image-to-image pipeline

v1.0.0-beta.18

26 Nov 15:54
7b2d1c4
Pre-release
  • Added image-to-image support (via the WebUI)
  • Improved the WebUI
  • Added OpenVINO image-to-image support

v1.0.0-beta.17

22 Nov 13:48
Pre-release
  • Configurable LCM models; see the configs/lcm-models.txt file
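As an illustration, configs/lcm-models.txt would hold one Hugging Face model ID per line. The IDs below are examples of publicly available LCM models, not necessarily the shipped defaults:

```
SimianLuo/LCM_Dreamshaper_v7
latent-consistency/lcm-sdxl
latent-consistency/lcm-ssd-1b
```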
  • Fixed Qt GUI layout spacing problem

v1.0.0-beta.16

19 Nov 11:04
Pre-release
  • Added 2-step inference for the LCM-LoRA workflow

v1.0.0-beta.15

18 Nov 15:57
4aea116
Pre-release
  • Fast 2- and 3-step inference
  • LCM-LoRA fused models for faster inference
  • Added real-time text-to-image generation on CPU (experimental)
  • Fixed DPI scale issue
  • Fixed SDXL tiny autoencoder issue
  • Supports integrated GPUs (iGPU) using OpenVINO (export DEVICE=GPU)
  • 5.7x speedup using OpenVINO (2 steps, tiny autoencoder)
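The iGPU option is set through an environment variable before launching the app. A sketch for a Linux shell (start.sh is assumed to be the repo's launcher script; adjust for your platform):

```
# Route OpenVINO inference to the integrated GPU instead of the CPU.
export DEVICE=GPU
./start.sh
```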

v1.0.0-beta.13

12 Nov 16:59
3ff82ff
Pre-release
  • Added support for custom OpenVINO models (with LCM-LoRA baked in)
  • Added negative prompt support for OpenVINO models (set guidance scale > 1.0)
  • 2x faster inference on CPU
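A negative prompt only takes effect when classifier-free guidance is active, i.e. when the guidance scale is above 1.0. A hedged sketch using the optimum-intel OpenVINO pipeline (the model ID and prompts are placeholders, not FastSD CPU internals):

```python
from optimum.intel import OVStableDiffusionPipeline

# Export a Stable Diffusion checkpoint to OpenVINO on load.
pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True  # placeholder base model
)

image = pipe(
    "portrait photo of an astronaut",          # placeholder prompt
    negative_prompt="blurry, low quality",
    guidance_scale=1.5,  # must be > 1.0 for the negative prompt to apply
    num_inference_steps=4,
).images[0]
```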

v1.0.0-beta.12

11 Nov 16:35
Pre-release
  • Added SDXL and SSD-1B LCM models
  • Added LCM-LoRA support, which works well with fine-tuned Stable Diffusion 1.5 or SDXL models
  • Added negative prompt support in LCM-LoRA mode
  • LCM-LoRA models can be configured via a text configuration file
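The LCM-LoRA bullets correspond to the standard diffusers recipe: load a fine-tuned SD 1.5 base, swap in the LCM scheduler, and attach the LCM-LoRA weights (a sketch; the base model and prompt shown are placeholders):

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Any fine-tuned SD 1.5 checkpoint can serve as the base model here.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# Few-step sampling; low guidance works best with LCM-LoRA.
image = pipe(
    "a cozy cabin in a pine forest",  # placeholder prompt
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
```

For SDXL bases, the matching adapter is latent-consistency/lcm-lora-sdxl.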