
Exception during processing !!! AttnProcessor2_0.__call__() got multiple values for argument 'encoder_hidden_states' #18

Open
StartHua opened this issue Dec 10, 2024 · 3 comments

Comments

@StartHua

Exception during processing !!! `AttnProcessor2_0.__call__()` got multiple values for argument 'encoder_hidden_states'

@libhot

libhot commented Dec 12, 2024

Same error message here, when inference.yaml is set to `enable_xformers_memory_efficient_attention: false`:

```
File "/share/ai/memo-main/inference.py", line 258, in <module>
    main()
File "/share/ai/memo-main/inference.py", line 234, in main
    pipeline_output = pipeline(
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
File "/share/ai/memo-main/memo/pipelines/video_pipeline.py", line 260, in __call__
    noise_pred = self.diffusion_net(
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
File "/share/ai/memo-main/memo/models/unet_3d.py", line 484, in forward
    sample, res_samples, audio_embedding = downsample_block(
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
File "/share/ai/memo-main/memo/models/unet_3d_blocks.py", line 562, in forward
    hidden_states = motion_module(
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
File "/share/ai/memo-main/memo/models/motion_module.py", line 69, in forward
    hidden_states = self.temporal_transformer(
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
File "/share/ai/memo-main/memo/models/motion_module.py", line 172, in forward
    hidden_states = block(
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
File "/share/ai/memo-main/memo/models/motion_module.py", line 253, in forward
    attention_block(
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/miniconda3/envs/memo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
File "/share/ai/memo-main/memo/models/motion_module.py", line 371, in forward
    hidden_states = self.processor(
TypeError: AttnProcessor2_0.__call__() got multiple values for argument 'encoder_hidden_states'
```
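For context, this kind of `TypeError` is Python's standard complaint when one call site fills a parameter both positionally and by keyword. A minimal sketch of the mechanism (illustrative only — the class and arguments below are simplified stand-ins, not MEMO's or diffusers' actual code):

```python
# Hypothetical, simplified attention processor to illustrate the error pattern.
class AttnProcessor2_0:
    def __call__(self, attn, hidden_states, encoder_hidden_states=None, **kwargs):
        # Return whichever states were supplied; the body is irrelevant here.
        return hidden_states if encoder_hidden_states is None else encoder_hidden_states


proc = AttnProcessor2_0()

# Correct: encoder_hidden_states supplied exactly once, as a keyword.
proc("attn", "h", encoder_hidden_states="e")

# Buggy: the third positional argument already lands in the
# encoder_hidden_states slot, then the keyword supplies it again.
try:
    proc("attn", "h", "enc", encoder_hidden_states="enc")
except TypeError as e:
    # -> "...__call__() got multiple values for argument 'encoder_hidden_states'"
    print(e)
```

This is consistent with the workaround below: the xformers processor presumably takes its arguments in a different order, so the colliding positional argument never reaches the `encoder_hidden_states` slot.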

@wallkop

wallkop commented Dec 20, 2024

Set `enable_xformers_memory_efficient_attention: true` in inference.yaml.
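The suggested fix as a config fragment (assuming the key sits at the top level of inference.yaml, as the earlier comment implies):

```yaml
# inference.yaml — enable the xformers attention path so the
# AttnProcessor2_0 code path that raises the TypeError is not used.
enable_xformers_memory_efficient_attention: true
```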

@shirubei

#6 (comment)
