Checklist
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
Describe the bug
The min_p value supplied in the request is not used during inference.
Reproduction
Create a request that sets min_p and check the server logs to see whether the parameter was applied:
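For example (a minimal sketch, assuming the api_server from the Environment section below is running locally on port 8000 with --model-name Qwen_2.5; the min_p value of 0.1 is arbitrary):

```shell
# Send a chat completion request with min_p set in the body
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen_2.5",
        "messages": [{"role": "user", "content": "Hello"}],
        "min_p": 0.1
      }'
```

Then inspect the DEBUG log output for this request's sampling parameters to confirm whether min_p was picked up.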
Environment
docker run --runtime nvidia --gpus all --shm-size 64g -d \
  --name lmdeploy-QWEN_2.5_32B --restart unless-stopped \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  openmmlab/lmdeploy:latest-cu12 \
  lmdeploy serve api_server --server-port 8000 \
    --model-name Qwen_2.5 --backend turbomind --model-format awq \
    --enable-prefix-caching --log-level DEBUG \
    Qwen/Qwen2.5-32B-Instruct-AWQ
Error traceback
No response