
KeyError: 'sdpa' #318

Closed
miloskovacevic68 opened this issue Oct 24, 2024 · 10 comments

Comments

@miloskovacevic68

I installed the newest version of marker-pdf with pip install marker-pdf in a newly created venv environment with Python 3.10.
When I try to use marker_single from the CLI, I get this error:

```
Loaded detection model vikp/surya_det3 on device cuda with dtype torch.float16
Loaded detection model vikp/surya_layout3 on device cuda with dtype torch.float16
Traceback (most recent call last):
  File "/home/milos/PycharmProjects/raga_marker/venv/bin/marker_single", line 8, in <module>
    sys.exit(main())
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/convert_single.py", line 31, in main
    model_lst = load_all_models()
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/marker/models.py", line 78, in load_all_models
    order = setup_order_model(device, dtype)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/marker/models.py", line 66, in setup_order_model
    model = load_order_model()
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/surya/model/ordering/model.py", line 27, in load_model
    model = OrderVisionEncoderDecoderModel.from_pretrained(checkpoint, config=config, torch_dtype=dtype)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py", line 376, in from_pretrained
    return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4096, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py", line 199, in __init__
    decoder = AutoModelForCausalLM.from_config(config.decoder)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 440, in from_config
    return model_class._from_config(config, **kwargs)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1543, in _from_config
    model = cls(config, **kwargs)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/surya/model/ordering/decoder.py", line 495, in __init__
    self.model = MBartOrderDecoderWrapper(config)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/surya/model/ordering/decoder.py", line 480, in __init__
    self.decoder = MBartOrderDecoder(config)
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/surya/model/ordering/decoder.py", line 294, in __init__
    self.layers = nn.ModuleList([MBartOrderDecoderLayer(config) for _ in range(config.decoder_layers)])
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/surya/model/ordering/decoder.py", line 294, in <listcomp>
    self.layers = nn.ModuleList([MBartOrderDecoderLayer(config) for _ in range(config.decoder_layers)])
  File "/home/milos/PycharmProjects/raga_marker/venv/lib/python3.10/site-packages/surya/model/ordering/decoder.py", line 209, in __init__
    self.self_attn = MBART_ATTENTION_CLASSES[config._attn_implementation](
KeyError: 'sdpa'
```

Please help!
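For context on what the traceback is showing: the last frame looks up an attention class in a dict keyed by `config._attn_implementation`, and newer transformers releases resolve that field to `"sdpa"`, a key the older registry never contained. The minimal sketch below reproduces that failure pattern with a hypothetical registry (`ATTENTION_CLASSES`, `EagerAttention`, and the fallback are illustrative, not surya's actual code):

```python
# Hypothetical sketch of the failing pattern, NOT surya's real code.
# A registry maps attention-implementation names to classes; if the config
# resolves to "sdpa" but only "eager" is registered, the lookup raises
# KeyError: 'sdpa' exactly as in the traceback above.

class EagerAttention:
    """Stand-in for a registered attention implementation."""
    pass

# Registry with no "sdpa" entry -- mirrors the situation in the bug report.
ATTENTION_CLASSES = {"eager": EagerAttention}

def get_attention_class(attn_implementation: str):
    """Look up an attention class, falling back to 'eager' for unknown keys.

    The fallback is one possible mitigation; the other is pinning
    transformers to a version that never resolves the key to "sdpa".
    """
    try:
        return ATTENTION_CLASSES[attn_implementation]
    except KeyError:
        return ATTENTION_CLASSES["eager"]

print(get_attention_class("sdpa").__name__)  # falls back to EagerAttention
```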

@AdrianY0809

Same problem here.

@xunmenglt

I solved it with `pip install transformers==4.45.2`.
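For anyone hitting this in a fresh environment, the workaround above amounts to pinning transformers below 4.46 after installing marker-pdf. A sketch of the full setup (paths and venv name are illustrative):

```shell
# Create a clean venv, install marker-pdf, then pin transformers to 4.45.2,
# a version whose attention-implementation keys still match what surya's
# ordering decoder expects (per the workaround in this thread).
python3 -m venv venv
source venv/bin/activate
pip install marker-pdf
pip install transformers==4.45.2  # overrides the 4.46.x pulled in by default
```

Note this is only a stopgap until the patched releases referenced later in the thread are published.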

@MarcinRutecki

@xunmenglt thank you, it worked.

@johmicrot

I can also confirm that @xunmenglt's fix solved the issue for me.

@VikParuchuri
Owner

Yes, this is a bug with transformers 4.46. I will patch it today.

@Jensssen

This is what I call realtime-bug-fixing. Amazing! 🥇

@VikParuchuri
Owner

VikParuchuri commented Oct 25, 2024

See #319 and VikParuchuri/surya#226; I will merge them shortly.

@miloskovacevic68
Author

miloskovacevic68 commented Oct 25, 2024 via email

@WaleedChughtai30

> I solve it by 'pip install transformers==4.45.2'

Worked for me too!

@bineea

bineea commented Nov 22, 2024

> I solve it by 'pip install transformers==4.45.2'

Worked for me too!
