Added flux training instructions
jaretburkett committed Aug 10, 2024
1 parent b3e0329 commit 2308ef2
Showing 4 changed files with 128 additions and 23 deletions.
60 changes: 42 additions & 18 deletions README.md
@@ -1,21 +1,9 @@
# AI Toolkit by Ostris

## Special Thanks
I want to give special thanks to Hugging Face for their amazing work and their continued support of
my work. This repo would not be possible without them.


## IMPORTANT NOTE - READ THIS
This is an active WIP repo that is not ready for others to use, and definitely not ready for non-developers to use.
I am making major breaking changes and pushing straight to master until I have it in a planned state. I have big changes
planned for the config files and the general structure. I may change how training works entirely. You are welcome to use it,
but keep that in mind. If more people start to use it, I will follow better branch checkout standards, but for now
this is my personal active experiment.

Report bugs as you find them, but not knowing how to train ML models, set up an environment, or use Python is not a bug.
I will make all of this more user-friendly eventually.

I will make a better README later.
This is my research repo. I do a lot of experiments in it, and it is possible that I will break things.
If something breaks, check out an earlier commit. This repo can train a lot of things, and it is
hard to keep up with all of them.

## Installation

@@ -51,13 +39,49 @@ pip install torch --use-pep517 --extra-index-url https://download.pytorch.org/wh
pip install -r requirements.txt
```

## FLUX.1 Training

### WIP. I am updating docs and optimizing as fast as I can. If there are bugs, open a ticket. Not knowing how to get it to work is NOT a bug. Be patient as I continue to develop it.

Training currently only works with FLUX.1-dev, which means anything you train will inherit the
non-commercial license. It is also a gated model, so you need to accept the license on HF before using it.
Otherwise, this will fail. Here are the required steps to set up access.

### Requirements
You currently need a dedicated GPU with **at least 24GB of VRAM** to train FLUX.1. If you are also using the GPU to drive
your monitors, it will probably not fit, as that takes up some VRAM. I may be able to get this lower, but for now,
it won't work. It may not work on Windows; I have only tested on Linux for now. This is still extremely experimental,
and a lot of quantizing and tricks had to happen to get it to fit on 24GB at all.
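
As a quick sanity check before launching a run, a minimal sketch like the following (using PyTorch, which the installation steps above already set up) reports total VRAM against the 24GB requirement stated here:

```python
# Hedged sketch: confirm the GPU has roughly the 24GB this currently requires.
import torch

assert torch.cuda.is_available(), "FLUX.1 training needs a CUDA GPU"
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"GPU 0: {total_gb:.1f} GB total VRAM")
if total_gb < 24:
    print("Warning: under 24GB of VRAM; training will likely not fit")
```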

1. Sign in to HF and accept the model access here: [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
2. Make a file named `.env` in the root of this folder
3. [Get a READ key from huggingface](https://huggingface.co/settings/tokens/new?) and add it to the `.env` file like so: `HF_TOKEN=your_key_here` (a quick way to verify the token is sketched below)
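
If you want to sanity-check the token and license acceptance before a long run, here is a minimal sketch. It assumes `python-dotenv` and `huggingface_hub` are available in your environment (install them if not); it just fetches a small metadata file from the gated repo.

```python
# Minimal sketch: verify that HF_TOKEN can reach the gated FLUX.1-dev repo.
# Assumes python-dotenv and huggingface_hub are installed.
import os

from dotenv import load_dotenv
from huggingface_hub import hf_hub_download

load_dotenv()  # reads HF_TOKEN from the .env file in the current directory
hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-dev",
    filename="model_index.json",  # small metadata file; enough to prove access
    token=os.environ["HF_TOKEN"],
)
print("Token is valid and the FLUX.1-dev license has been accepted.")
```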

### Training
1. Copy the example config file located at `config/examples/train_lora_flux_24gb.yaml` to the `config` folder and rename it to `whatever_you_want.yml`
2. Edit the file, following the comments in it
3. Run the file like so `python3 run.py config/whatever_you_want.yml`

When you start, a folder will be created using the `name` and `training_folder` from the config file. It will have all
checkpoints and sample images in it. You can stop the training at any time using ctrl+c, and when you resume, it will pick back up
from the last checkpoint.

IMPORTANT: if you press ctrl+c while it is saving, it will likely corrupt that checkpoint, so wait until it is done saving.

### Need help?

Please do not open a bug report unless it is a bug in the code. You are welcome to [Join my Discord](https://discord.gg/SzVB3wYvxF)
and ask for help there. However, please refrain from PMing me directly with general questions or support requests. Ask in the Discord
and I will answer when I can.

### Training in the cloud
Coming very soon. I am getting the base out first; then I will have a notebook that makes all of that work.

---

## Current Tools
## EVERYTHING BELOW THIS LINE IS OUTDATED

I have a lot of hodgepodge scripts from my ML work that I am going to move over to this repo, but this is what is
here so far.
It may still work as documented below, but I have not tested it in a while.

---

85 changes: 85 additions & 0 deletions config/examples/train_lora_flux_24gb.yaml
@@ -0,0 +1,85 @@
---
job: extension
config:
  # this name will be used for the output folder and the file names
  name: "my_first_flux_lora_v1"
  process:
    - type: 'sd_trainer'
      # root folder to save training sessions/samples/weights
      training_folder: "output"
      # uncomment to see performance stats in the terminal every N steps
      # performance_log_every: 1000
      device: cuda:0
      # if a trigger word is specified, it will be added to captions of training data if it does not already exist
      # alternatively, in your captions you can add [trigger] and it will be replaced with the trigger word
      # trigger_word: "p3r5on"
      network:
        type: "lora"
        linear: 16
        linear_alpha: 16
      save:
        dtype: float16 # precision to save
        save_every: 250 # save every this many steps
        max_step_saves_to_keep: 4 # how many intermittent saves to keep
      datasets:
        # datasets are a folder of images. captions need to be txt files with the same name as the image
        # for instance image2.jpg and image2.txt. Only jpg, jpeg, and png are supported currently
        # images will automatically be resized and bucketed into the resolution specified
        - folder_path: "/mnt/Datasets/1920s_illustrations"
          # - folder_path: "/path/to/images/folder"
          caption_ext: "txt"
          caption_dropout_rate: 0.05 # will drop out the caption 5% of the time
          shuffle_tokens: false # shuffle caption order, split by commas
          cache_latents_to_disk: true # leave this true unless you know what you're doing
          resolution: [ 512, 768, 1024 ] # flux enjoys multiple resolutions
      train:
        batch_size: 1
        steps: 4000 # total number of steps to train
        gradient_accumulation_steps: 1
        train_unet: true
        train_text_encoder: false # probably won't work with flux
        content_or_style: balanced # content, style, balanced
        gradient_checkpointing: true # need this on unless you have a ton of vram
        noise_scheduler: "flowmatch" # for training only
        optimizer: "adamw8bit"
        lr: 4e-4

        # ema will smooth out learning, but could slow it down. Recommended to leave on.
        ema_config:
          use_ema: true
          ema_decay: 0.99

        # will probably need this if gpu supports it for flux, other dtypes may not work correctly
        dtype: bf16
      model:
        # huggingface model name or path
        name_or_path: "black-forest-labs/FLUX.1-dev"
        is_flux: true
        quantize: true # run 8bit mixed precision
      sample:
        sampler: "flowmatch" # must match train.noise_scheduler
        sample_every: 250 # sample every this many steps
        width: 1024
        height: 1024
        prompts:
          # you can add [trigger] to the prompts here and it will be replaced with the trigger word
          # - "[trigger] holding a sign that says 'I LOVE PROMPTS!'"
          - "woman with red hair, playing chess at the park, bomb going off in the background"
          - "a woman holding a coffee cup, in a beanie, sitting at a cafe"
          - "a horse in a night club dancing, fish eye lens, smoke machine, lazer lights, holding a martini, large group"
          - "a man showing off his cool new t shirt at the beach, a shark is jumping out of the water in the background"
          - "a bear building a log cabin in the snow covered mountains"
          - "woman playing the guitar, on stage, singing a song, laser lights, punk rocker"
          - "hipster man with a beard, building a chair, in a wood shop"
          - "photo of a man, white background, medium shot, modeling clothing, studio lighting, white backdrop"
          - "a man holding a sign that says, 'this is a sign'"
          - "a bulldog, in a post apocalyptic world, with a shotgun, in a leather jacket, in a desert, with a motorcycle"
        neg: "" # not used on flux
        seed: 42
        walk_seed: true
        guidance_scale: 4
        sample_steps: 20
# you can add any additional meta info here. [name] is replaced with config name at top
meta:
  name: "[name]"
  version: '1.0'
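
To make the structure above concrete, here is a hedged Python sketch, not the toolkit's actual loading code, that reads a config like this one, shows where the key fields live, demonstrates the `[trigger]` substitution described in the comments, and checks a dataset folder against the caption format (image plus same-name `.txt` file). The config filename is the one from the training steps above; everything else comes straight from the YAML.

```python
# Hedged sketch: load a config like the one above and sanity-check the dataset.
# This mirrors the file's structure, not run.py's actual implementation.
from pathlib import Path

import yaml  # PyYAML

with open("config/whatever_you_want.yml") as f:
    cfg = yaml.safe_load(f)

process = cfg["config"]["process"][0]  # the single 'sd_trainer' process
print("job name:", cfg["config"]["name"])
print("total steps:", process["train"]["steps"])

# the [trigger] placeholder in captions/prompts is replaced with trigger_word
trigger = process.get("trigger_word", "p3r5on")  # commented out above, so a placeholder
print("[trigger] holding a coffee cup".replace("[trigger]", trigger))

# every jpg/jpeg/png should have a caption file with the same name
dataset = Path(process["datasets"][0]["folder_path"])
if dataset.is_dir():
    for img in sorted(dataset.iterdir()):
        if img.suffix.lower() in {".jpg", ".jpeg", ".png"}:
            status = "ok" if img.with_suffix(".txt").exists() else "MISSING caption"
            print(f"{img.name}: {status}")
```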
2 changes: 1 addition & 1 deletion toolkit/config_modules.py
@@ -128,7 +128,7 @@ def __init__(self, **kwargs):
if self.lorm_config.do_conv:
    self.conv = 4

self.transformer_only = kwargs.get('transformer_only', False)
self.transformer_only = kwargs.get('transformer_only', True)  # default flipped to True; configs can still override it


AdapterTypes = Literal['t2i', 'ip', 'ip+', 'clip', 'ilora', 'photo_maker', 'control_net']
4 changes: 0 additions & 4 deletions toolkit/pipelines.py
@@ -1349,7 +1349,6 @@ def __call__(

noise_pred_text = self.transformer(
    hidden_states=latents,
    # YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transformer model (we should not keep it but I want to keep the inputs same for the model for testing)
    timestep=timestep / 1000,
    guidance=guidance,
    pooled_projections=pooled_prompt_embeds,
@@ -1363,7 +1362,6 @@
# todo combine these
noise_pred_uncond = self.transformer(
    hidden_states=latents,
    # YiYi notes: divide it by 1000 for now because we scale it by 1000 in the transformer model (we should not keep it but I want to keep the inputs same for the model for testing)
    timestep=timestep / 1000,
    guidance=guidance,
    pooled_projections=negative_pooled_prompt_embeds,
@@ -1376,8 +1374,6 @@

noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond)



# compute the previous noisy sample x_t -> x_t-1
latents_dtype = latents.dtype
latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]
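
For reference, the `noise_pred` line kept above is standard classifier-free guidance: the unconditional prediction is pushed toward the text-conditioned one by the guidance scale. In the notation of the code, with $s$ being `self.guidance_scale`:

```latex
\hat{\epsilon} = \epsilon_{\text{uncond}} + s \left( \epsilon_{\text{text}} - \epsilon_{\text{uncond}} \right)
```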
