Issues: QwenLM/Qwen2.5
[Bug]: All Chinese characters in the function-call output are escaped when Qwen2.5-72B-Instruct is deployed with vLLM
#1009 · opened Oct 11, 2024 by ericg108
[Badcase]: qwen2.5-72b inference results on Ascend 910 do not match expectations (help wanted)
#992 · opened Sep 28, 2024 by tianshiyisi
[Badcase]: Qwen2.5-72B-Instruct-GPTQ-Int4 input_size_per_partition
#986 · opened Sep 26, 2024 by hyliush
[Badcase]: Qwen2.5 14B Instruct can't stop generation (enhancement)
#985 · opened Sep 26, 2024 by Jeremy-Hibiki
[REQUEST]: working tool call configuration with localai
#949 · opened Sep 23, 2024 by codefromthecrypt
[Badcase]: Model inference with Qwen2.5-32B-Instruct-GPTQ-Int4 appears as garbled text ("!!!!!!")
#945 · opened Sep 23, 2024 by zhanaali
[Badcase]: Poor quality of outputs in Russian language (enhancement)
#928 · opened Sep 19, 2024 by ilya-corneli
[Badcase]: When running Qwen 2.5 with Ollama, the end of the reply keeps repeating (enhancement)
#925 · opened Sep 19, 2024 by zjttfs
[Badcase]: Loss does not drop when using Liger Kernel with Qwen2.5
#921 · opened Sep 19, 2024 by Se-Hun
[Badcase]: Repetition after the conversation and endless emoji output (Ollama + Open WebUI)
#920 · opened Sep 19, 2024 by valleysprings
[Badcase]: Poor quality of outputs in Polish language (enhancement)
#919 · opened Sep 19, 2024 by gileneusz
[Badcase]: Fine-tuning Qwen2-72B-Instruct using QLoRA with bitsandbytes yields grad_norm=NaN and loss=0
#912 · opened Sep 14, 2024 by Jackmoyu001
ValueError when loading quantized Qwen2-72B INT4 model using vLLM on multiple GPUs
#907 · opened Sep 11, 2024 by venki-lfc
Inference issue about qwen2-72b-instruct (enhancement)
#900 · opened Sep 10, 2024 by TuTuHuss