Hi there, I wrote a script to run the IP-Adapter and OpenPose ControlNet demo with SDXL. However, when loading stable_ckpt/garment_extractor.safetensors into sd_pipe.unet, I get the errors below. It seems the garment_extractor.safetensors architecture doesn't match the SDXL UNet architecture. Is there an easy way to adapt stable_ckpt/garment_extractor.safetensors so that it runs on SDXL? My code is in the attachments.
Here are the errors:
Traceback (most recent call last):
File "/root/autodl-tmp/MagicClothing/stable_version/stable_gradio_ipadapter_openpose_xl.py", line 69, in
ip_model = StableIPAdapterFaceID(pipe, garment_extractor_path, garment_ip_layer_path, image_encoder_path, ip_ckpt, device, args.enable_cloth_guidance)
File "/root/autodl-tmp/MagicClothing/garment_adapter/garment_ipadapter_faceid_stable.py", line 75, in init
super().init(sd_pipe, ref_path, image_encoder_path, ip_ckpt, device, enable_cloth_guidance, num_tokens, torch_dtype, set_seg_model)
File "/root/autodl-tmp/MagicClothing/garment_adapter/garment_ipadapter_faceid.py", line 351, in init
ref_unet.load_state_dict(state_dict, strict=False)
File "/root/autodl-tmp/envs/oms-diffusion/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
size mismatch for down_blocks.1.attentions.0.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for down_blocks.1.attentions.0.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for down_blocks.1.attentions.1.proj_in.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for down_blocks.1.attentions.1.proj_out.weight: copying a param with shape torch.Size([640, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for down_blocks.2.attentions.0.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 2048]).
size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 2048]).
size mismatch for down_blocks.2.attentions.0.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for down_blocks.2.attentions.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 2048]).
size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 2048]).
size mismatch for down_blocks.2.attentions.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for up_blocks.0.resnets.2.norm1.weight: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([1920]).
size mismatch for up_blocks.0.resnets.2.norm1.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([1920]).
size mismatch for up_blocks.0.resnets.2.conv1.weight: copying a param with shape torch.Size([1280, 2560, 3, 3]) from checkpoint, the shape in current model is torch.Size([1280, 1920, 3, 3]).
size mismatch for up_blocks.0.resnets.2.conv_shortcut.weight: copying a param with shape torch.Size([1280, 2560, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1920, 1, 1]).
size mismatch for up_blocks.1.attentions.0.norm.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.norm.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.0.proj_in.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.norm3.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.norm3.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.weight: copying a param with shape torch.Size([10240, 1280]) from checkpoint, the shape in current model is torch.Size([5120, 640]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.bias: copying a param with shape torch.Size([10240]) from checkpoint, the shape in current model is torch.Size([5120]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([640, 2560]).
size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.0.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.0.proj_out.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.norm.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.norm.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.1.proj_in.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.norm3.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.norm3.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.weight: copying a param with shape torch.Size([10240, 1280]) from checkpoint, the shape in current model is torch.Size([5120, 640]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.bias: copying a param with shape torch.Size([10240]) from checkpoint, the shape in current model is torch.Size([5120]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([640, 2560]).
size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.1.proj_out.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.norm.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.norm.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.2.proj_in.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_k.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([640, 2048]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.norm3.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.norm3.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.weight: copying a param with shape torch.Size([10240, 1280]) from checkpoint, the shape in current model is torch.Size([5120, 640]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.bias: copying a param with shape torch.Size([10240]) from checkpoint, the shape in current model is torch.Size([5120]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.2.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([640, 2560]).
size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.attentions.2.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 640]).
size mismatch for up_blocks.1.attentions.2.proj_out.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.0.norm1.weight: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([1920]).
size mismatch for up_blocks.1.resnets.0.norm1.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([1920]).
size mismatch for up_blocks.1.resnets.0.conv1.weight: copying a param with shape torch.Size([1280, 2560, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 1920, 3, 3]).
size mismatch for up_blocks.1.resnets.0.conv1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.0.time_emb_proj.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 1280]).
size mismatch for up_blocks.1.resnets.0.time_emb_proj.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.0.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.0.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.0.conv2.weight: copying a param with shape torch.Size([1280, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 640, 3, 3]).
size mismatch for up_blocks.1.resnets.0.conv2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.0.conv_shortcut.weight: copying a param with shape torch.Size([1280, 2560, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 1920, 1, 1]).
size mismatch for up_blocks.1.resnets.0.conv_shortcut.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.1.norm1.weight: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for up_blocks.1.resnets.1.norm1.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for up_blocks.1.resnets.1.conv1.weight: copying a param with shape torch.Size([1280, 2560, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 1280, 3, 3]).
size mismatch for up_blocks.1.resnets.1.conv1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.1.time_emb_proj.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 1280]).
size mismatch for up_blocks.1.resnets.1.time_emb_proj.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.1.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.1.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.1.conv2.weight: copying a param with shape torch.Size([1280, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 640, 3, 3]).
size mismatch for up_blocks.1.resnets.1.conv2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.1.conv_shortcut.weight: copying a param with shape torch.Size([1280, 2560, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 1280, 1, 1]).
size mismatch for up_blocks.1.resnets.1.conv_shortcut.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.2.norm1.weight: copying a param with shape torch.Size([1920]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for up_blocks.1.resnets.2.norm1.bias: copying a param with shape torch.Size([1920]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for up_blocks.1.resnets.2.conv1.weight: copying a param with shape torch.Size([1280, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 960, 3, 3]).
size mismatch for up_blocks.1.resnets.2.conv1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.2.time_emb_proj.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([640, 1280]).
size mismatch for up_blocks.1.resnets.2.time_emb_proj.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.2.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.2.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.2.conv2.weight: copying a param with shape torch.Size([1280, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 640, 3, 3]).
size mismatch for up_blocks.1.resnets.2.conv2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.2.conv_shortcut.weight: copying a param with shape torch.Size([1280, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 960, 1, 1]).
size mismatch for up_blocks.1.resnets.2.conv_shortcut.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.upsamplers.0.conv.weight: copying a param with shape torch.Size([1280, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 640, 3, 3]).
size mismatch for up_blocks.1.upsamplers.0.conv.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.0.norm1.weight: copying a param with shape torch.Size([1920]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for up_blocks.2.resnets.0.norm1.bias: copying a param with shape torch.Size([1920]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for up_blocks.2.resnets.0.conv1.weight: copying a param with shape torch.Size([640, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 960, 3, 3]).
size mismatch for up_blocks.2.resnets.0.conv1.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.time_emb_proj.weight: copying a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is torch.Size([320, 1280]).
size mismatch for up_blocks.2.resnets.0.time_emb_proj.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.norm2.weight: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.norm2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.conv2.weight: copying a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3]).
size mismatch for up_blocks.2.resnets.0.conv2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.conv_shortcut.weight: copying a param with shape torch.Size([640, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 960, 1, 1]).
size mismatch for up_blocks.2.resnets.0.conv_shortcut.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.1.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.1.conv1.weight: copying a param with shape torch.Size([640, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 640, 3, 3]).
size mismatch for up_blocks.2.resnets.1.conv1.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.time_emb_proj.weight: copying a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is torch.Size([320, 1280]).
size mismatch for up_blocks.2.resnets.1.time_emb_proj.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.norm2.weight: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.norm2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.conv2.weight: copying a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3]).
size mismatch for up_blocks.2.resnets.1.conv2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.conv_shortcut.weight: copying a param with shape torch.Size([640, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 640, 1, 1]).
size mismatch for up_blocks.2.resnets.1.conv_shortcut.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.norm1.weight: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.2.norm1.bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.2.conv1.weight: copying a param with shape torch.Size([640, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 640, 3, 3]).
size mismatch for up_blocks.2.resnets.2.conv1.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.time_emb_proj.weight: copying a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is torch.Size([320, 1280]).
size mismatch for up_blocks.2.resnets.2.time_emb_proj.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.norm2.weight: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.norm2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.conv2.weight: copying a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3]).
size mismatch for up_blocks.2.resnets.2.conv2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.conv_shortcut.weight: copying a param with shape torch.Size([640, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 640, 1, 1]).
size mismatch for up_blocks.2.resnets.2.conv_shortcut.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for mid_block.attentions.0.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 2048]).
size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 2048]).
size mismatch for mid_block.attentions.0.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
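For reference, the size-mismatch lines point to a text-encoder width difference: the checkpoint's cross-attention layers take 768-dim embeddings (SD 1.5's CLIP ViT-L), while the SDXL UNet expects 2048-dim (ViT-L concatenated with OpenCLIP ViT-bigG). Here is a minimal check, assuming safetensors is installed and using the same checkpoint path as in my script:

```python
# Inspect which text-encoder width the garment extractor expects.
# SD 1.5 cross-attention consumes 768-dim CLIP ViT-L embeddings; SDXL consumes
# 2048-dim (ViT-L + OpenCLIP ViT-bigG), which matches the mismatches above.
from safetensors.torch import load_file

ckpt = load_file("stable_ckpt/garment_extractor.safetensors")

# One of the cross-attention keys reported in the traceback above.
key = "down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.weight"
print(key, tuple(ckpt[key].shape))  # (640, 768) -> SD 1.5-style UNet
```

So it looks like these weights only load cleanly into an SD 1.5-style UNet2DConditionModel, and I don't see a straightforward way to convert them to SDXL without retraining, unless I'm missing something.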
size mismatch for up_blocks.1.resnets.2.conv2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.resnets.2.conv_shortcut.weight: copying a param with shape torch.Size([1280, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 960, 1, 1]).
size mismatch for up_blocks.1.resnets.2.conv_shortcut.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.1.upsamplers.0.conv.weight: copying a param with shape torch.Size([1280, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 640, 3, 3]).
size mismatch for up_blocks.1.upsamplers.0.conv.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.0.norm1.weight: copying a param with shape torch.Size([1920]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for up_blocks.2.resnets.0.norm1.bias: copying a param with shape torch.Size([1920]) from checkpoint, the shape in current model is torch.Size([960]).
size mismatch for up_blocks.2.resnets.0.conv1.weight: copying a param with shape torch.Size([640, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 960, 3, 3]).
size mismatch for up_blocks.2.resnets.0.conv1.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.time_emb_proj.weight: copying a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is torch.Size([320, 1280]).
size mismatch for up_blocks.2.resnets.0.time_emb_proj.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.norm2.weight: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.norm2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.conv2.weight: copying a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3]).
size mismatch for up_blocks.2.resnets.0.conv2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.0.conv_shortcut.weight: copying a param with shape torch.Size([640, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 960, 1, 1]).
size mismatch for up_blocks.2.resnets.0.conv_shortcut.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.norm1.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.1.norm1.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.1.conv1.weight: copying a param with shape torch.Size([640, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 640, 3, 3]).
size mismatch for up_blocks.2.resnets.1.conv1.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.time_emb_proj.weight: copying a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is torch.Size([320, 1280]).
size mismatch for up_blocks.2.resnets.1.time_emb_proj.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.norm2.weight: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.norm2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.conv2.weight: copying a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3]).
size mismatch for up_blocks.2.resnets.1.conv2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.1.conv_shortcut.weight: copying a param with shape torch.Size([640, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 640, 1, 1]).
size mismatch for up_blocks.2.resnets.1.conv_shortcut.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.norm1.weight: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.2.norm1.bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([640]).
size mismatch for up_blocks.2.resnets.2.conv1.weight: copying a param with shape torch.Size([640, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 640, 3, 3]).
size mismatch for up_blocks.2.resnets.2.conv1.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.time_emb_proj.weight: copying a param with shape torch.Size([640, 1280]) from checkpoint, the shape in current model is torch.Size([320, 1280]).
size mismatch for up_blocks.2.resnets.2.time_emb_proj.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.norm2.weight: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.norm2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.conv2.weight: copying a param with shape torch.Size([640, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 320, 3, 3]).
size mismatch for up_blocks.2.resnets.2.conv2.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for up_blocks.2.resnets.2.conv_shortcut.weight: copying a param with shape torch.Size([640, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 640, 1, 1]).
size mismatch for up_blocks.2.resnets.2.conv_shortcut.bias: copying a param with shape torch.Size([640]) from checkpoint, the shape in current model is torch.Size([320]).
size mismatch for mid_block.attentions.0.proj_in.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 2048]).
size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 768]) from checkpoint, the shape in current model is torch.Size([1280, 2048]).
size mismatch for mid_block.attentions.0.proj_out.weight: copying a param with shape torch.Size([1280, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
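For reference, the mismatched shapes above indicate that garment_extractor.safetensors was trained against an SD 1.5-style UNet: its cross-attention to_k/to_v weights expect a 768-dimensional text embedding and its proj_in/proj_out layers are 1x1 convolutions, whereas the SDXL UNet expects a 2048-dimensional context and uses linear projections. A minimal sketch to confirm this by inspecting the checkpoint directly (assumes the safetensors package is installed and uses the stable_ckpt/garment_extractor.safetensors path from the script above):

```python
from safetensors.torch import load_file

# Hypothetical local path; point this at stable_ckpt/garment_extractor.safetensors.
state_dict = load_file("stable_ckpt/garment_extractor.safetensors")

# The second dimension of the cross-attention K/V projections reveals which text
# encoder the reference UNet was trained with: 768 -> SD 1.5, 1024 -> SD 2.x, 2048 -> SDXL.
for name, tensor in state_dict.items():
    if name.endswith("attn2.to_k.weight"):
        print(name, tuple(tensor.shape))
        break
```

If the second dimension prints as 768, the checkpoint only fits an SD 1.5 UNet2DConditionModel, and no simple reshaping will make it load into the SDXL UNet; an SDXL-compatible garment extractor would need to be trained or released separately.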