Issues: HabanaAI/vllm-fork (forked from vllm-project/vllm)
[Bug]: RuntimeError: Failed to infer device type on Gaudi 3 System · #666 · label: bug · opened Jan 3, 2025 by winniechj
[Bug]: Device Type HPU is not supported for torch.generator() API · #627 · label: bug · opened Dec 12, 2024 by nageshdn
[Bug]: Cannot Run Qwen2 Embedding Model on Gaudi · #583 · label: bug · opened Dec 4, 2024 by rvoleti89
[Bug]: The generated text on BFloat16 is not as good as that on Float32 · #443 · label: bug · opened Oct 29, 2024 by ccrhx4
[Bug]: MQLLMEngine dies after a period of inactivity · #416 · label: bug · opened Oct 23, 2024 by Xaenalt
[RFC]: Change VLLM_DECODE_BLOCK_BUCKET_* design to fit both small and large batch sizes in one warmup · #328 · labels: intel, stale · opened Sep 24, 2024 by ccrhx4
[Usage]: vLLM can't run Qwen 32B inference · #193 · label: external · opened Aug 17, 2024 by kunger97