[Bug] Qwen2-VL-7B with sglang Performance Degradation on MME benchmark #2112
Comments
Hello, could you share a vLLM test result? That may be more helpful. Thanks.
Hi, thank you for your prompt reply.
I also used the official Qwen2-VL-7B API from Aliyun and got a total score of around 470.
Sorry for the bad performance.
Oh, this is terrible! Have you reported it to vLLM? @Mr-Loevan
Hmm, I am able to reproduce an issue where I see different logprobs between the Hugging Face implementation and calling sglang or vllm.
What I see is a discrepancy in both vllm and sglang; sglang, at least when decoding with temperature 0, does yield the same answer after 100 tokens, but by then sglang has already gone down a different path.
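In case it helps, here is a minimal sketch of that kind of side-by-side check, assuming both backends are served through their OpenAI-compatible chat endpoints with logprobs enabled; the ports, model name, image URL, and prompt below are placeholders rather than the exact setup used here.

```python
# Minimal sketch (assumptions: both servers are already running and expose the
# OpenAI-compatible chat API with logprobs; URLs, model name, image, and prompt
# are placeholders, not the exact configuration from this issue).
from openai import OpenAI

QUESTION = "Is this a photo of a cat? Answer yes or no."  # MME-style yes/no prompt
IMAGE_URL = "https://example.com/sample.jpg"              # placeholder image

def first_token_top_logprobs(base_url: str, model: str):
    """Ask for the first answer token and return its top-5 token logprobs."""
    client = OpenAI(base_url=base_url, api_key="EMPTY")
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": IMAGE_URL}},
                {"type": "text", "text": QUESTION},
            ],
        }],
        temperature=0,
        max_tokens=1,
        logprobs=True,      # assumes the backend implements OpenAI-style logprobs
        top_logprobs=5,
    )
    token_info = resp.choices[0].logprobs.content[0]
    return [(t.token, t.logprob) for t in token_info.top_logprobs]

# Compare the two backends side by side (ports are the common defaults).
for name, url in [("sglang", "http://localhost:30000/v1"),
                  ("vllm", "http://localhost:8000/v1")]:
    print(name, first_token_top_logprobs(url, "Qwen/Qwen2-VL-7B-Instruct"))
```

Requesting only the first answer token keeps the comparison focused on where the distributions first diverge.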
I've checked; alignment is much better if you don't include an image in the query.
Describe the bug
Qwen2-VL-7B cannot reproduce its reported MME performance when served with sglang, while pure transformers can.
Reproduction
Both decode with temperature==0; I suspect the issue possibly lies with the input image.
By the way, I also tested vllm and got similarly degraded performance to sglang.
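For reference, here is a minimal sketch of the plain-transformers path (greedy decoding, i.e. temperature 0) that the degraded backends are being compared against; the model ID, image file, and question are placeholder assumptions, not the actual MME harness.

```python
# Hypothetical reference sketch: greedy decoding with plain transformers.
# Model ID, image path, and question are placeholders; the real MME harness
# loops this over every image/question pair and scores the yes/no answers.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("sample.jpg")                        # placeholder MME image
question = "Is this a photo of a cat? Answer yes or no."

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": question},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# do_sample=False gives greedy decoding, equivalent to temperature 0
out = model.generate(**inputs, max_new_tokens=16, do_sample=False)
answer = processor.batch_decode(out[:, inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)[0]
print(answer)
```

The sketch only shows a single query; the scores below come from running the full MME evaluation with each backend.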
sglang
=========== Cognition ===========
total score: 474.2857142857143
commonsense_reasoning score: 144.28571428571428
numerical_calculation score: 72.5
text_translation score: 162.5
code_reasoning score: 95.0
transformers
=========== Cognition ===========
total score: 633.5714285714286
commonsense_reasoning score: 148.57142857142856
numerical_calculation score: 125.0
text_translation score: 200.0
code_reasoning score: 160.0
Environment
Python: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0]
CUDA available: True
GPU 0,1: NVIDIA A100-SXM4-80GB
GPU 0,1 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
CUDA Driver Version: 535.129.03
PyTorch: 2.4.0+cu121
sglang: 0.3.5.post2
flashinfer: 0.1.6+cu121torch2.4
triton: 3.0.0
transformers: 4.46.3
requests: 2.32.3
tqdm: 4.67.0
numpy: 1.26.4
aiohttp: 3.11.6
fastapi: 0.115.5
hf_transfer: 0.1.8
huggingface_hub: 0.26.2
interegular: 0.3.3
packaging: 24.2
PIL: 10.4.0
psutil: 6.1.0
pydantic: 2.9.2
uvicorn: 0.32.0
uvloop: 0.21.0
zmq: 26.2.0
vllm: 0.6.3.post1
multipart: 0.0.17
openai: 1.54.5
anthropic: 0.39.0
NVIDIA Topology:
GPU0 GPU1 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 NIC10 NIC11 NIC12 NIC13 NIC14 NIC15 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 SYS SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB PXB SYS PXB SYS SYS 0-31,64-91 0 N/A
GPU1 NV12 X PXB PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PXB SYS 36-63,100-101 1 N/A
NIC0 SYS PXB X PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX SYS
NIC1 SYS PXB PIX X PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX SYS
NIC2 SYS PXB PIX PIX X SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX SYS
NIC3 SYS SYS SYS SYS SYS X PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX
NIC4 SYS SYS SYS SYS SYS PIX X PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX
NIC5 SYS SYS SYS SYS SYS PIX PIX X SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX
NIC6 SYS SYS SYS SYS SYS SYS SYS SYS X PIX PIX SYS SYS SYS PIX SYS SYS SYS
NIC7 SYS SYS SYS SYS SYS SYS SYS SYS PIX X PIX SYS SYS SYS PIX SYS SYS SYS
NIC8 SYS SYS SYS SYS SYS SYS SYS SYS PIX PIX X SYS SYS SYS PIX SYS SYS SYS
NIC9 PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X PIX PIX SYS PIX SYS SYS
NIC10 PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX X PIX SYS PIX SYS SYS
NIC11 PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX PIX X SYS PIX SYS SYS
NIC12 SYS SYS SYS SYS SYS SYS SYS SYS PIX PIX PIX SYS SYS SYS X SYS SYS SYS
NIC13 PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX PIX PIX SYS X SYS SYS
NIC14 SYS PXB PIX PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X SYS
NIC15 SYS SYS SYS SYS SYS PIX PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NIC9: mlx5_9
NIC10: mlx5_10
NIC11: mlx5_11
NIC12: mlx5_bond_0
NIC13: mlx5_bond_1
NIC14: mlx5_bond_2
NIC15: mlx5_bond_3
ulimit soft: 655350