==================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
bin /opt/conda/lib/python3.8/site-packages/bitsandbytes-0.39.1-py3.8.egg/bitsandbytes/libbitsandbytes_cuda116.so
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /opt/conda/lib/python3.8/site-packages/bitsandbytes-0.39.1-py3.8.egg/bitsandbytes/libbitsandbytes_cuda116.so...
/opt/conda/lib/python3.8/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
  warn(f"Failed to load image Python extension: {e}")
num of inst captions, masks, boxes and points: 4 4 4 4
config for evaluation: {'diffusion': {'target': 'ldm.models.diffusion.ldm.LatentDiffusion', 'params': {'linear_start': 0.00085, 'linear_end': 0.012, 'timesteps': 1000}}, 'model': {'target': 'ldm.modules.diffusionmodules.openaimodel.UNetModel', 'params': {'image_size': 64, 'in_channels': 4, 'out_channels': 4, 'model_channels': 320, 'attention_resolutions': [4, 2, 1], 'num_res_blocks': 2, 'channel_mult': [1, 2, 4, 4], 'num_heads': 8, 'transformer_depth': 1, 'context_dim': 768, 'fuser_type': 'gatedSA', 'use_checkpoint': True, 'sd_v1_5': True, 'efficient_attention': True, 'grounding_tokenizer': {'target': 'ldm.modules.diffusionmodules.text_grounding_net.UniFusion', 'params': {'in_dim': 768, 'out_dim': 768, 'mid_dim': 3072, 'train_add_boxes': True, 'train_add_points': True, 'train_add_scribbles': True, 'train_add_masks': True, 'test_drop_boxes': False, 'test_drop_points': False, 'test_drop_scribbles': True, 'test_drop_masks': True, 'use_seperate_tokenizer': True}}}}, 'autoencoder': {'target': 'ldm.models.autoencoder.AutoencoderKL', 'params': {'scale_factor': 0.18215, 'embed_dim': 4, 'ddconfig': {'double_z': True, 'z_channels': 4, 'resolution': 256, 'in_channels': 3, 'out_ch': 3, 'ch': 128, 'ch_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}}}, 'text_encoder': {'target': 'ldm.modules.encoders.modules.FrozenCLIPEmbedder'}, 'train_dataset_names': {'Grounding': {'which_layer_text': 'before', 'image_size': 512, 'max_boxes_per_data': 30, 'prob_use_caption': 1.0, 'random_crop': False, 'random_flip': True}}, 'grounding_tokenizer_input': {'target': 'grounding_input.text_grounding_tokinzer_input.GroundingNetInput'}}
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
saved image with boxes at OUTPUT/gc7.5-seed0-alpha0.8/4_boxes.png
Loading pipeline components...: 40%|█████████████▏ | 2/5 [01:08<01:42, 34.00s/it]
Traceback (most recent call last):
  File "inference.py", line 310, in <module>
    run(meta, model, autoencoder, text_encoder, diffusion, clip_model, clip_processor, config, grounding_tokenizer_input, starting_noise, guidance_scale=args.guidance_scale)
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "inference.py", line 114, in run
    pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
  File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 1286, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "/opt/conda/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 531, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/opt/conda/envs/transformers/src/transformers/tokenization_utils_base.py", line 1846, in from_pretrained
    return cls._from_pretrained(
  File "/opt/conda/envs/transformers/src/transformers/tokenization_utils_base.py", line 2009, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/opt/conda/envs/transformers/src/transformers/models/clip/tokenization_clip.py", line 328, in __init__
    with open(merges_file, encoding="utf-8") as merges_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
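For context, the `TypeError` at the bottom is raised inside `CLIPTokenizer.__init__`: diffusers is loading the pipeline's tokenizer subfolder and ends up passing `merges_file=None`, which usually means the cached pipeline snapshot has no `merges.txt`. A minimal way to check which tokenizer files are actually present (the cache root and model directory below are assumptions based on the default Hugging Face cache layout; substitute your real paths):

```python
import os

# Hypothetical snapshot path -- replace <org>, <model>, and <hash> with the
# actual directory names from your Hugging Face cache.
snapshot = os.path.expanduser(
    "~/.cache/huggingface/hub/models--<org>--<model>/snapshots/<hash>"
)

# Each CLIP tokenizer needs both vocab.json and merges.txt.
for sub in ("tokenizer", "tokenizer_2"):
    for fname in ("vocab.json", "merges.txt"):
        path = os.path.join(snapshot, sub, fname)
        print(path, "->", "present" if os.path.isfile(path) else "MISSING")
```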
Hi, it seems that the `merges_file` you provided is None. `merges_file` should be the path to the tokenizer's `merges.txt` file.
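If the file is missing because the download was interrupted, re-fetching the pipeline is the simplest fix; alternatively, the tokenizer can be built from explicit local paths so `merges_file` is never None. A sketch with placeholder model id and paths (assumptions, not verified against this repo):

```python
from diffusers import StableDiffusionXLImg2ImgPipeline
from transformers import CLIPTokenizer

# Option 1: re-download a possibly incomplete cached snapshot.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",  # placeholder model id
    force_download=True,
)

# Option 2: construct the tokenizer from explicit files.
tokenizer = CLIPTokenizer(
    vocab_file="/path/to/tokenizer/vocab.json",    # placeholder path
    merges_file="/path/to/tokenizer/merges.txt",   # placeholder path
)
```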