
[Easy feature upgrade (really!)] Raise the limits in "Settings / User interface / UI Defaults 'Flux'" -AKA- (txt2img height to 3300, not 2048, and txt2img width to 3300, not 2048) #2536

Open
Giribot opened this issue Jan 6, 2025 · 2 comments

Comments


Giribot commented Jan 6, 2025

Hello,
Flux can generate images at native resolutions above 2048 pixels.
It is time to raise the default limit from 2048 to 3300 (or more?) in "Settings / User interface / UI Defaults 'Flux'", because:
txt2img height is currently capped at 2048
txt2img width is currently capped at 2048
(Keep the original defaults, but give end users the option to raise the limit in the settings menu and experiment at their own risk.)

Thank you!


MisterChief95 commented Jan 6, 2025

Until it gets added to the settings menu, you can make the change yourself in the ui-config.json file. These lines:

"txt2img/Width/maximum": 2048,
"txt2img/Height/maximum": 2048,

img2img should have similarly named fields as well.
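For anyone who prefers to script it, here is a minimal sketch of that edit in Python, assuming ui-config.json sits in the WebUI root directory and that the img2img keys follow the same naming pattern as the txt2img ones (only the txt2img keys are confirmed above); back the file up before running it:

```python
import json
from pathlib import Path

# Assumed location: ui-config.json in the WebUI root; adjust the path if yours differs.
CONFIG = Path("ui-config.json")
NEW_MAX = 3300  # slider maximum proposed in this issue

config = json.loads(CONFIG.read_text(encoding="utf-8"))

# The txt2img keys are quoted in the comment above; the img2img names are an
# assumption by analogy and are simply skipped if they are not present.
for key in (
    "txt2img/Width/maximum",
    "txt2img/Height/maximum",
    "img2img/Width/maximum",
    "img2img/Height/maximum",
):
    if key in config:
        config[key] = NEW_MAX

CONFIG.write_text(json.dumps(config, indent=4), encoding="utf-8")
print(f"Slider maxima set to {NEW_MAX}")
```

The file is read when the UI starts, so restart the WebUI afterwards for the new maxima to take effect.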


Giribot commented Jan 7, 2025

Thanks!
Proof of concept: it works on my low-end GPU laptop (4 GB, NVIDIA).

(screenshot: Capture d'écran 2025-01-07 125809)

But that's not really a reasonable use case.
Kudos to Forge's robust code for holding up!

`Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-635-gf5330788
Commit hash: f533078
D:\Data\Packages\Stable Diffusion WebUI Forge\extensions-builtin\forge_legacy_preprocessors\install.py:2: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
D:\Data\Packages\Stable Diffusion WebUI Forge\extensions-builtin\sd_forge_controlnet\install.py:2: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
import pkg_resources
Launching Web UI with arguments: --cuda-malloc --cuda-stream --gradio-allowed-path 'D:\Data\Images'
Using cudaMallocAsync backend.
Total VRAM 4096 MB, total RAM 20226 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3050 Ti Laptop GPU : cudaMallocAsync
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: True
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: D:\Data\Packages\Stable Diffusion WebUI Forge\models\ControlNetPreprocessor
Loading additional modules ... done.
2025-01-07 07:29:59,701 - ControlNet - INFO - ControlNet UI callback registered.
D:\Data\Packages\Stable Diffusion WebUI Forge\extensions\Automatic1111-Geeky-Remb\scripts\geeky-remb.py:582: GradioDeprecationWarning: unexpected argument for Button: label
run_button = gr.Button(label="Run GeekyRemB")
*** Error executing callback ui_tabs_callback for D:\Data\Packages\Stable Diffusion WebUI Forge\extensions\model_preset_manager\scripts\main.py
Traceback (most recent call last):
File "D:\Data\Packages\Stable Diffusion WebUI Forge\modules\script_callbacks.py", line 283, in ui_tabs_callback
res += c.callback() or []
File "D:\Data\Packages\Stable Diffusion WebUI Forge\extensions\model_preset_manager\scripts\main.py", line 463, in on_ui_tabs
model_generation_data = gr.Textbox(label = model_generation_data_label_text(), value = "", lines = 3, elem_id = "def_model_gen_data_textbox").style(show_copy_button=True)
AttributeError: 'Textbox' object has no attribute 'style'


Model selected: {'checkpoint_info': {'filename': 'D:\Data\Packages\Stable Diffusion WebUI Forge\models\Stable-diffusion\sd\flux1DevHyperNF4Flux1DevBNB_flux1DevHyperNF4.safetensors', 'hash': 'a005585e'}, 'additional_modules': ['D:\Data\Packages\Stable Diffusion WebUI Forge\models\text_encoder\ae.safetensors', 'D:\Data\Packages\Stable Diffusion WebUI Forge\models\text_encoder\clip_l.safetensors', 'D:\Data\Packages\Stable Diffusion WebUI Forge\models\text_encoder\t5xxl_fp16.safetensors'], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 43.4s (prepare environment: 9.7s, launcher: 0.8s, import torch: 11.2s, initialize shared: 0.2s, other imports: 0.5s, list SD models: 0.2s, load scripts: 5.1s, initialize google blockly: 10.9s, create ui: 3.2s, gradio launch: 1.6s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 74.99% GPU memory (3071.00 MB) to load weights, and use 25.01% GPU memory (1024.00 MB) to do matrix computation.
Loading Model: {'checkpoint_info': {'filename': 'D:\Data\Packages\Stable Diffusion WebUI Forge\models\Stable-diffusion\sd\flux1DevHyperNF4Flux1DevBNB_flux1DevHyperNF4.safetensors', 'hash': 'a005585e'}, 'additional_modules': ['D:\Data\Packages\Stable Diffusion WebUI Forge\models\text_encoder\ae.safetensors', 'D:\Data\Packages\Stable Diffusion WebUI Forge\models\text_encoder\clip_l.safetensors', 'D:\Data\Packages\Stable Diffusion WebUI Forge\models\text_encoder\t5xxl_fp16.safetensors'], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'transformer': 1722, 'vae': 244, 'text_encoder': 196, 'text_encoder_2': 220, 'ignore': 0}
Using Default T5 Data Type: torch.float16
Using Detected UNet Type: nf4
Using pre-quant state dict!
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': 'nf4', 'computation_dtype': torch.bfloat16}
Model loaded in 5.1s (unload existing model: 0.3s, forge model load: 4.7s).
NeverOOM Enabled for UNet (always maximize offload)
NeverOOM Enabled for VAE (always tiled)
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
VARM State Changed To NO_VRAM
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
[Unload] Trying to free 13464.34 MB for cuda:0 with 0 models keep loaded ... Done.
CPU Swap Loaded (blocked method): 9568.73 MB, GPU Loaded: 73.14 MB
Moving model(s) has taken 0.20 seconds
Distilled CFG Scale: 3.5
[Unload] Trying to free 17532.81 MB for cuda:0 with 0 models keep loaded ... Unload model JointTextEncoder Done.
CPU Swap Loaded (blocked method): 6246.77 MB, GPU Loaded: 0.07 MB
Moving model(s) has taken 0.09 seconds
40%|#### | 8/20 [4:20:52<7:43:39, 2318.27s/it]`
