
mps failure with tts: IndexError: tuple index out of range in pytorch_utils.py #33786

Closed
ajkessel opened this issue Sep 28, 2024 · 17 comments · Fixed by #34538

@ajkessel

System Info

  • transformers version: 4.46.0.dev0
  • Platform: macOS-15.0-x86_64-i386-64bit
  • Python version: 3.11.10
  • Huggingface_hub version: 0.25.1
  • Safetensors version: 0.4.5
  • Accelerate version: not installed
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.2.2 (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

I'm not sure if this is a transformers bug, a coqui-ai bug, or just a lack of MPS support for what I'm trying to do.

Same result whether PYTORCH_ENABLE_MPS_FALLBACK is set or not.

Python code:

from TTS.api import TTS
tts = TTS(model_name='multi-dataset/xtts_v2/en',progress_bar=True).to('mps')
tts.tts_to_file( text = "The quick brown fox jumped over the lazy dog.", speaker='Annmarie Nele', language='en', file_path='out.wav')

result:

  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/TTS/api.py", line 334, in tts_to_file
    wav = self.tts(
          ^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/TTS/api.py", line 276, in tts
    wav = self.synthesizer.tts(
          ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/TTS/utils/synthesizer.py", line 386, in tts
    outputs = self.tts_model.synthesize(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/TTS/tts/models/xtts.py", line 412, in synthesize
    return self.inference(text, language, gpt_cond_latent, speaker_embedding, **settings)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/TTS/tts/models/xtts.py", line 541, in inference
    gpt_codes = self.gpt.generate(
                ^^^^^^^^^^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/TTS/tts/layers/xtts/gpt.py", line 590, in generate
    gen = self.gpt_inference.generate(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 1829, in generate
    self._prepare_special_tokens(generation_config, kwargs_has_attention_mask, device=device)
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 1678, in _prepare_special_tokens
    and isin_mps_friendly(elements=eos_token_tensor, test_elements=pad_token_tensor).any()
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/adam/dev/.venv/lib/python3.11/site-packages/transformers/pytorch_utils.py", line 325, in isin_mps_friendly
    return elements.tile(test_elements.shape[0], 1).eq(test_elements.unsqueeze(1)).sum(dim=0).bool().squeeze()
                         ~~~~~~~~~~~~~~~~~~~^^^
IndexError: tuple index out of range

I've also reported this as issue 3998 on coqui-ai.
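
For context, the failure can be reproduced in isolation. The traceback shows test_elements.shape[0] failing on pad_token_tensor, which is evidently a 0-dim (scalar) tensor; a 0-dim tensor's shape is an empty tuple, so indexing it raises. A minimal sketch of the root cause, assuming only torch:

import torch

elements = torch.tensor([1, 2, 3])
test_elements = torch.tensor(2)  # 0-dim scalar tensor: shape == torch.Size([])

# Mirrors the failing expression in pytorch_utils.isin_mps_friendly:
# test_elements.shape[0] raises because the shape tuple is empty.
try:
    elements.tile(test_elements.shape[0], 1)
except IndexError as e:
    print(e)  # tuple index out of range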

Expected behavior

Successful execution.

@ajkessel ajkessel added the bug label Sep 28, 2024
@Swastik-Swarup-Dash

Hey @ajkessel, I think this can work:

import torch
from TTS.api import TTS
from transformers import pytorch_utils
def patched_isin_mps_friendly(elements, test_elements):
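    # Promote a 0-dim (scalar) tensor to 1-dim so that test_elements.shape[0] exists.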
    if test_elements.ndim == 0:
        test_elements = test_elements.unsqueeze(0)
    return elements.tile(test_elements.shape[0], 1).eq(test_elements.unsqueeze(1)).sum(dim=0).bool().squeeze()

pytorch_utils.isin_mps_friendly = patched_isin_mps_friendly

tts = TTS(model_name='multi-dataset/xtts_v2/en', progress_bar=True).to('mps')
tts.tts_to_file(
    text="The quick brown fox jumped over the lazy dog.",
    speaker='Annmarie Nele',
    language='en',
    file_path='out.wav'
)
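
A quick sanity check of the patch before running TTS (hypothetical values, just exercising the 0-dim path that used to crash):

import torch

scalar = torch.tensor(2)  # 0-dim, like the pad-token tensor in the traceback
print(patched_isin_mps_friendly(torch.tensor([1, 2, 3]), scalar))
# -> tensor([False,  True, False])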

@markuskreitzer

markuskreitzer commented Sep 30, 2024

It seems like everything I'm running on my MacBook Pro M1 with the transformers lib is broken now. I'm using Python 3.10. This patch fixes it! Thanks!!!

@markuskreitzer

@ajkessel This seems to be broken for me on any of the official examples I've used for Llama and Qwen inference models.

@ajkessel
Author

ajkessel commented Sep 30, 2024

I tried @Swastik-Swarup-Dash's workaround and got this error:

NotImplementedError: The operator 'aten::upsample_linear1d.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.

To the extent it's relevant:

Model Name: iMac
Model Identifier: iMac20,2
Processor Name: 10-Core Intel Core i9
Processor Speed: 3.6 GHz
Number of Processors: 1
Total Number of Cores: 10
L2 Cache (per Core): 256 KB
L3 Cache: 20 MB
Hyper-Threading Technology: Enabled
Memory: 32 GB

At least with this workaround, setting PYTORCH_ENABLE_MPS_FALLBACK=1 does avoid the exception. It just looks like it's not using the GPU at all then.

@zachmayer

I'm seeing the same issue with MPS inference. CPU inference works fine.

@Swastik-Swarup-Dash — maybe you could make a pull request with your patch!

@Swastik-Swarup-Dash

@zachmayer let me give it a try

@Swastik-Swarup-Dash

@ajkessel You can try this:

export PYTORCH_ENABLE_MPS_FALLBACK=1

You can set the environment variable within your script using the os module:

import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
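
(Note: PyTorch typically reads this variable at initialization, so set it before torch or TTS is imported, i.e. at the very top of the script.)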

Then run the TTS model:

tts = TTS(model_name='multi-dataset/xtts_v2/en', progress_bar=True).to('mps')
tts.tts_to_file(
    text="The quick brown fox jumped over the lazy dog.",
    file_path="output.wav"
)

Maybe this can work. Also make sure your macOS version is up to date, since MPS support is only available in macOS 12.3 and later. If none of that works, switch to CUDA:

tts = TTS(model_name='multi-dataset/xtts_v2/en', progress_bar=True).to('cuda')

@LysandreJik
Member

cc @ArthurZucker

@ajkessel
Author

ajkessel commented Oct 2, 2024

With transformers==4.45.1 and tortoise-tts, I get the same IndexError: tuple index out of range error.

With transformers==4.31.0 (the version requested by tortoise-tts), instead I get RuntimeError: MPS backend out of memory.

transformers-4.31.0-error.txt
transformers-4.45.1-error.txt

@ArthurZucker
Collaborator

cc @eustlb, seems like self.stop_mel_token is None (not an MPS issue)

@ajkessel
Author

ajkessel commented Oct 3, 2024

For what it's worth, all of this same code works fine for me on a Windows box with CUDA (both in Linux (WSL) and native Windows). So even if it's not an MPS issue, it seems to be Mac-specific.

@pistudios

> Hey @ajkessel, I think this can work:
>
> import torch
> from transformers import pytorch_utils
> def patched_isin_mps_friendly(elements, test_elements):
>     if test_elements.ndim == 0:
>         test_elements = test_elements.unsqueeze(0)
>     return elements.tile(test_elements.shape[0], 1).eq(test_elements.unsqueeze(1)).sum(dim=0).bool().squeeze()
>
> pytorch_utils.isin_mps_friendly = patched_isin_mps_friendly

You’re a lifesaver! I’ve been struggling for the past few days with a Florence 2 workflow on MPS that suddenly stopped working. I encountered the same error, and patching pytorch_utils.isin_mps_friendly with the method you provided solved it! Thank you so much!


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@alexaaaaaander

> Hey @ajkessel, I think this can work: [the isin_mps_friendly patch and @pistudios's reply, quoted above]

I'm such a novice that I don't even know where to implement this code... anyone have any suggestions? (I'm using Florence 2 on MPS as well.)

@pistudios

> I'm such a novice that I don't even know where to implement this code... anyone have any suggestions? (I'm using Florence 2 on MPS as well.)

Did you use ComfyUI_Florence2? You can find the nodes.py file and add the following content at the beginning of the code:

from transformers import pytorch_utils

def patched_isin_mps_friendly(elements, test_elements):
    if test_elements.ndim == 0:
        test_elements = test_elements.unsqueeze(0)
    return elements.tile(test_elements.shape[0], 1).eq(test_elements.unsqueeze(1)).sum(dim=0).bool().squeeze()

pytorch_utils.isin_mps_friendly = patched_isin_mps_friendly

The main purpose is to replace the original pytorch_utils.isin_mps_friendly with patched_isin_mps_friendly, so that all subsequent calls to pytorch_utils.isin_mps_friendly use the patched version. Hope this helps you out!

@gante
Member

gante commented Nov 4, 2024

The suggestion by @Swastik-Swarup-Dash was added to pytorch_utils.isin_mps_friendly, so there should be no further need to monkey-patch it from today onwards 🤗
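
For reference, the fixed helper incorporates the same 0-dim guard on the MPS code path. Roughly, as a sketch rather than the exact library source:

import torch

def isin_mps_friendly(elements, test_elements):
    # Sketch of the patched transformers helper: promote a scalar
    # test_elements to 1-dim before the tile/eq workaround on MPS,
    # and defer to torch.isin on every other device.
    if elements.device.type == "mps":
        if not isinstance(test_elements, torch.Tensor):
            test_elements = torch.tensor(test_elements)
        if test_elements.ndim == 0:
            test_elements = test_elements.unsqueeze(0)
        return elements.tile(test_elements.shape[0], 1).eq(test_elements.unsqueeze(1)).sum(dim=0).bool().squeeze()
    return torch.isin(elements, test_elements)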

@alexgenovese

@gante I updated Florence2, and it throws the same issue; maybe I missed something. Could you help me with this?

2024-11-13T11:38:29.723009 - !!! Exception during processing !!! tuple index out of range
2024-11-13T11:38:29.723566 - Traceback (most recent call last):
  File "/Users/alexgenovese/Desktop/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alexgenovese/Desktop/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alexgenovese/Desktop/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/alexgenovese/Desktop/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alexgenovese/Desktop/ComfyUI/custom_nodes/ComfyUI-Florence2/nodes.py", line 302, in encode
    generated_ids = model.generate(
                    ^^^^^^^^^^^^^^^
  File "/Users/alexgenovese/.cache/huggingface/modules/transformers_modules/CogFlorence-2.2-Large/modeling_florence2.py", line 2796, in generate
    return self.language_model.generate(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alexgenovese/Desktop/ComfyUI/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alexgenovese/Desktop/ComfyUI/venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 1828, in generate
    self._prepare_special_tokens(generation_config, kwargs_has_attention_mask, device=device)
  File "/Users/alexgenovese/Desktop/ComfyUI/venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 1677, in _prepare_special_tokens
    and isin_mps_friendly(elements=eos_token_tensor, test_elements=pad_token_tensor).any()
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alexgenovese/Desktop/ComfyUI/venv/lib/python3.12/site-packages/transformers/pytorch_utils.py", line 325, in isin_mps_friendly
    return elements.tile(test_elements.shape[0], 1).eq(test_elements.unsqueeze(1)).sum(dim=0).bool().squeeze()
                         ~~~~~~~~~~~~~~~~~~~^^^
IndexError: tuple index out of range
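
(The traceback above still points at the pre-fix expression at pytorch_utils.py line 325, which suggests the transformers installed in the ComfyUI venv predates the fix; upgrading it inside that venv, e.g. with pip install -U transformers, should pick up the patched helper.)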
