
hello! I get the following problem when I run edit.py, what can I do to fix it: #3

Open
Y-T-cyber opened this issue Sep 11, 2024 · 16 comments


@Y-T-cyber

No description provided.

@Y-T-cyber
Author

Traceback (most recent call last):
  File "edit.py", line 54, in <module>
    images, _ = run_and_displayloss_TR(prompts, controller, run_baseline=False, latent=x_t,
  File "edit.py", line 9, in run_and_displayloss_TR
    images, x_t = text2image_ldm_stable_TR(ldm_stable, prompts, controller, latent=latent,
  File "/data-c/heyiliang/anaconda3/envs/mag/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data-c/heyiliang/code/MAG-Edit/code_tr/network.py", line 919, in text2image_ldm_stable_TR
    latent_optimize_new = update_latent_TR(model, latents1, latents2, text_embeddings1, text_embeddings2, None, controller, t, i, max_iter=max_iter, scale=scale)
  File "/data-c/heyiliang/code/MAG-Edit/code_tr/network.py", line 829, in update_latent_TR
    loss3 = compute_ca_TR(atten, controller.adj_mask, controller.word, controller.prompt)
  File "/data-c/heyiliang/code/MAG-Edit/code_tr/network.py", line 751, in compute_ca_TR
    loss = loss / (len(attn_maps) * (len(word[1])))
ZeroDivisionError: float division by zero

@bimsarapathiraja

Me too

@Orannue
Collaborator

Orannue commented Oct 10, 2024

Thank you for your interest in our work! Could you please provide the script you are running and print out len(attn_maps) and len(word[1]) to check if they are zero? This will help us determine if there is an issue with the --target_word parameter or with saving the CA Maps.
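For anyone hitting this before the root cause is found, the division can be guarded so the failure mode is explicit. A minimal sketch (editor's addition; `normalize_ca_loss` is a hypothetical helper, and the names `attn_maps` and `word[1]` simply mirror the traceback above):

```python
# Editor's sketch (not the repository's code): guard the normalization in
# compute_ca_TR so an empty attn_maps list or an empty target-word list
# raises a clear error instead of ZeroDivisionError.
def normalize_ca_loss(loss, attn_maps, word):
    denom = len(attn_maps) * len(word[1])
    if denom == 0:
        raise ValueError(
            f"no attention maps collected (len(attn_maps)={len(attn_maps)}) "
            f"or empty target word (len(word[1])={len(word[1])}); "
            "check --target_word and that the CA maps are being saved"
        )
    return loss / denom
```

This does not fix the underlying problem, but it turns the crash into a message that says which of the two lengths is zero.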

@SuanToupoi

Hello! I get the following problem when I run edit.py. I printed len(attn_maps) and len(word[1]); their values are 0 and 1, respectively.
Traceback (most recent call last):
  File "C:\stcode\MAG-Edit-main\code_tr\edit.py", line 54, in <module>
    images, _ = run_and_displayloss_TR(prompts, controller, run_baseline=False, latent=x_t,
  File "C:\stcode\MAG-Edit-main\code_tr\edit.py", line 9, in run_and_displayloss_TR
    images, x_t = text2image_ldm_stable_TR(ldm_stable, prompts, controller, latent=latent,
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 979, in text2image_ldm_stable_TR
    latent_optimize_new = update_latent_TR(model, latents1, latents2, text_embeddings1, text_embeddings2, None, controller, t, i, max_iter=max_iter, scale=scale)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 889, in update_latent_TR
    loss3 = compute_ca_TR(atten, controller.adj_mask, controller.word, controller.prompt)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 815, in compute_ca_TR
    loss = loss / (len(attn_maps) * (len(word[1])))
ZeroDivisionError: float division by zero

@zerojin0603

Hello! I’m experiencing the same issue. Were you able to resolve it?

@Orannue
Collaborator

Orannue commented Oct 30, 2024

Hi @Y-T-cyber @SuanToupoi @bimsarapathiraja @zerojin0603
Could you please provide the version of the diffusers library you're using? It would also be really helpful if you could share the files you're running for better context. If it's more convenient, feel free to send them to my email at [email protected].

@zerojin0603

> Hi @Y-T-cyber @SuanToupoi @bimsarapathiraja @zerojin0603, Could you please provide the version of the diffusers library you're using? It would also be really helpful if you could share the files you're running for better context. If it's more convenient, feel free to send them to my email at [email protected].

Hello,
Thank you for your response.

I’m currently using diffusers==0.19.3, as I encountered issues loading the models with earlier versions.
For running, I’ve used a random 1024x1024 image file from the CelebA-HQ dataset.

@Orannue
Collaborator

Orannue commented Oct 30, 2024

>> Hi @Y-T-cyber @SuanToupoi @bimsarapathiraja @zerojin0603, Could you please provide the version of the diffusers library you're using? It would also be really helpful if you could share the files you're running for better context. If it's more convenient, feel free to send them to my email at [email protected].
>
> Hello, Thank you for your response.
>
> I’m currently using diffusers==0.19.3, as I encountered issues loading the models with earlier versions. For running, I’ve used a random 1024x1024 image file from the CelebA-HQ dataset.

Please modify the following code:

if net_.__class__.__name__ == 'CrossAttention':

to
if net_.__class__.__name__ == 'Attention':
This change is necessary because the CrossAttention class has been renamed to Attention in the newer versions of the diffusers library. We'll address version compatibility in a future update.
Additionally, MAG-Edit has only been tested with 512x512 images. I am unsure of its performance with higher resolutions, so if possible, please resize the 1024x1024 images to 512x512 for more reliable results.
Please let me know if this is helpful :)
Thanks!
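If the same script has to run against more than one diffusers release, the class-name check can accept either name. A minimal sketch (editor's addition; the stand-in classes below are hypothetical placeholders, not actual diffusers modules):

```python
# Editor's sketch: a version-tolerant check for the module-class rename.
# Older diffusers releases name the module CrossAttention; newer releases
# renamed it to Attention.
CROSS_ATTN_CLASS_NAMES = ("CrossAttention", "Attention")

def is_cross_attention_module(net_):
    """True if net_ looks like a (cross-)attention module under either name."""
    return net_.__class__.__name__ in CROSS_ATTN_CLASS_NAMES

# Hypothetical stand-ins for modules the UNet traversal might encounter:
class CrossAttention:  # old diffusers name
    pass

class Attention:       # new diffusers name
    pass

class Linear:          # unrelated module, should be skipped
    pass
```

With this, the registration loop does not need to change again if the library renames the class back or the user pins a different version.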

@SuanToupoi

SuanToupoi commented Oct 30, 2024 via email

@zerojin0603

MAG-Edit/code_tr/ptp_utils.py

Thank you very much for your answer!
It worked well, though I did have to make some adjustments to the forward function.
The results look great even at 1024x1024.

Thanks again for your effort!

@CostaliyA CostaliyA reopened this Oct 30, 2024
@SuanToupoi

Sorry, I would like to ask: after I changed the code to "if net_.__class__.__name__ == 'CrossAttention':", the following issue occurred:

Traceback (most recent call last):
  File "C:\stcode\MAG-Edit-main\code_tr\edit.py", line 34, in <module>
    (image_gt, image_enc), x_t, uncond_embeddings = null_inversion.invert(args.img_path, args.source_prompt, offsets=(0, 0, 0, 0), verbose=True)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 759, in invert
    image_rec, ddim_latents = self.ddim_inversion(image_gt)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 716, in ddim_inversion
    ddim_latents = self.ddim_loop(latent)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 703, in ddim_loop
    noise_pred = self.get_noise_pred_single(latent, t, cond_embeddings)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 637, in get_noise_pred_single
    noise_pred = self.model.unet(latents, t, encoder_hidden_states=context)["sample"]
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1216, in forward
    sample, res_samples = downsample_block(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1288, in forward
    hidden_states = attn(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\transformers\transformer_2d.py", line 442, in forward
    hidden_states = block(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\attention.py", line 507, in forward
    attn_output = self.attn1(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states'

@Orannue
Collaborator

Orannue commented Oct 30, 2024

> Sorry, I would like to ask: after I changed the code to "if net_.__class__.__name__ == 'CrossAttention':", the following issue occurred:
>
> (full traceback quoted in the previous comment)

Please downgrade the version of the diffusers library, as the newer version has renamed the parameters. Alternatively, you can modify the following code:

def forward(x, context=None, mask=None):

to:
def forward(hidden_states, encoder_hidden_states=None, attention_mask=None):
and replace x, context, and mask with hidden_states, encoder_hidden_states, and attention_mask throughout the function.
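The renaming above can also be handled with a small adapter, so an old-style forward keeps working when diffusers calls it with the new keyword names. A hedged sketch (editor's addition; `adapt_forward` and the stand-in `old_forward` are hypothetical, not part of MAG-Edit or diffusers):

```python
# Editor's sketch: adapt an old-style forward(x, context=None, mask=None)
# so it can be called with the keyword names newer diffusers uses
# (hidden_states, encoder_hidden_states, attention_mask).
def adapt_forward(old_forward):
    def forward(hidden_states, encoder_hidden_states=None,
                attention_mask=None, **kwargs):
        # Extra keywords a newer diffusers version may pass are ignored.
        return old_forward(hidden_states, context=encoder_hidden_states,
                           mask=attention_mask)
    return forward

# Hypothetical stand-in for a patched attention forward:
def old_forward(x, context=None, mask=None):
    return (x, context, mask)

new_forward = adapt_forward(old_forward)
```

This keeps the body of the patched forward unchanged and confines the version difference to the wrapper's signature.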

@SuanToupoi

I edited the code into the following format:

        def forward(hidden_states, encoder_hidden_states=None, attention_mask=None):
            batch_size, sequence_length, dim = hidden_states.shape

            h = self.heads
            q = self.to_q(hidden_states)
            is_cross = encoder_hidden_states is not None
            encoder_hidden_states = encoder_hidden_states if is_cross else hidden_states

            k = self.to_k(encoder_hidden_states)
            v = self.to_v(encoder_hidden_states)
            q = self.reshape_heads_to_batch_dim(q)
            k = self.reshape_heads_to_batch_dim(k)
            v = self.reshape_heads_to_batch_dim(v)

            sim = torch.einsum("b i d, b j d -> b i j", q, k) * self.scale

            if attention_mask is not None:
                attention_mask = attention_mask.reshape(batch_size, -1)
                max_neg_value = -torch.finfo(sim.dtype).max
                attention_mask = attention_mask[:, None, :].repeat(h, 1, 1)
                sim.masked_fill_(~attention_mask, max_neg_value)

            # attention, what we cannot get enough of
            attn = sim.softmax(dim=-1)
            attn = controller(attn, is_cross, place_in_unet)

            out = torch.einsum("b i j, b j d -> b i d", attn, v)
            out = self.reshape_batch_dim_to_heads(out)
            return to_out(out)

        return forward

At the same time I set the following code: "if net_.__class__.__name__ == 'Attention':". At this point my diffusers version is 0.19.3 and my transformers version is 4.37.2. The problem reoccurs at runtime:

Traceback (most recent call last):
  File "C:\stcode\MAG-Edit-main\code_tr\edit.py", line 34, in <module>
    (image_gt, image_enc), x_t, uncond_embeddings = null_inversion.invert(args.img_path, args.source_prompt, offsets=(0, 0, 0, 0), verbose=True)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 759, in invert
    image_rec, ddim_latents = self.ddim_inversion(image_gt)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 716, in ddim_inversion
    ddim_latents = self.ddim_loop(latent)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 703, in ddim_loop
    noise_pred = self.get_noise_pred_single(latent, t, cond_embeddings)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 637, in get_noise_pred_single
    noise_pred = self.model.unet(latents, t, encoder_hidden_states=context)["sample"]
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\unet_2d_condition.py", line 915, in forward
    sample, res_samples = downsample_block(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 996, in forward
    hidden_states = attn(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\transformer_2d.py", line 292, in forward
    hidden_states = block(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\attention.py", line 155, in forward
    attn_output = self.attn1(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stcode\MAG-Edit-main\code_tr\ptp_utils.py", line 163, in forward
    q = self.reshape_heads_to_batch_dim(q)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1709, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'Attention' object has no attribute 'reshape_heads_to_batch_dim'

If you could take the time to read the email and help, I would greatly appreciate it.

@Orannue
Collaborator

Orannue commented Oct 30, 2024

> I edited the code into the following format: (forward function and full traceback quoted in the previous comment)
>
> If you could take the time to read the email and help, I would greatly appreciate it.

This issue may also be related to the version of the diffusers library. The reshape_heads_to_batch_dim function has been updated, as indicated here: https://github.com/huggingface/diffusers/blob/de9c72d58c3eb4bcd50901d0c23c7c46f0cec29b/src/diffusers/models/attention_processor.py#L337
Please try modifying the following code:

q = self.reshape_heads_to_batch_dim(q)

to:
q = self.head_to_batch_dim(q)
And make the same modification for the next two lines.
Additionally, please change the following code:
out = self.reshape_batch_dim_to_heads(out)

to:
out = self.batch_to_head_dim(out)
There are significant differences between the version of the diffusers library used in MAG-Edit and the latest version. We appreciate you bringing this issue to our attention, and we will update our code for compatibility. If you have any further questions, please feel free to contact us :)
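The two renames can also be bridged with a getattr fallback, so the patched forward runs on either diffusers version without editing it per release. A minimal sketch (editor's addition; the helper and the stand-in classes are hypothetical):

```python
# Editor's sketch: resolve the reshape helper across diffusers versions,
# where reshape_heads_to_batch_dim was renamed head_to_batch_dim (and
# reshape_batch_dim_to_heads became batch_to_head_dim).
def get_head_to_batch_fn(attn):
    for name in ("head_to_batch_dim", "reshape_heads_to_batch_dim"):
        fn = getattr(attn, name, None)
        if fn is not None:
            return fn
    raise AttributeError(
        f"no head-to-batch reshape helper on {type(attn).__name__}")

# Hypothetical stand-ins for the two naming generations:
class NewAttention:
    def head_to_batch_dim(self, t):
        return ("new", t)

class OldAttention:
    def reshape_heads_to_batch_dim(self, t):
        return ("old", t)
```

Inside the patched forward one would then call `get_head_to_batch_fn(self)(q)` instead of hard-coding either method name.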

@SuanToupoi

Thank you for your suggestion, but even after applying the modifications you mentioned, I still get the same error message:

Traceback (most recent call last):
  File "C:\stcode\MAG-Edit-main\code_tr\edit.py", line 34, in <module>
    (image_gt, image_enc), x_t, uncond_embeddings = null_inversion.invert(args.img_path, args.source_prompt, offsets=(0, 0, 0, 0), verbose=True)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 759, in invert
    image_rec, ddim_latents = self.ddim_inversion(image_gt)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 716, in ddim_inversion
    ddim_latents = self.ddim_loop(latent)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 703, in ddim_loop
    noise_pred = self.get_noise_pred_single(latent, t, cond_embeddings)
  File "C:\stcode\MAG-Edit-main\code_tr\network.py", line 637, in get_noise_pred_single
    noise_pred = self.model.unet(latents, t, encoder_hidden_states=context)["sample"]
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\diffusers\models\unet_2d_condition.py", line 915, in forward
    sample, res_samples = downsample_block(
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\anaconda\envs\magedit\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)

Perhaps my environment was not installed properly, or there is some other cause. I apologize for the inconvenience caused by my repeated inquiries about related issues.

@Orannue
Collaborator

Orannue commented Oct 30, 2024

> Thank you for your suggestion, but even after applying the modifications you mentioned, I still get the same error message: (truncated traceback quoted in the previous comment)

The error message appears to be incomplete. Could you please provide the full error message? Additionally, if possible, please share the code you’re running and details of your environment. You can send this information to my email at [email protected]

6 participants