
[Bug] Qwen2-VL-7B with sglang Performance Degradation on MME benchmark #2112

Open · 5 tasks done

@Mr-Loevan opened this issue Nov 21, 2024 · 6 comments
Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

Qwen2-VL-7B cannot reproduce its performance on MME using sglang, while pure transformers can.

Reproduction

Both decode with temperature==0; I suspect the issue may be related to the input image handling.
By the way, I also tested vllm and saw similarly degraded performance, like sglang.
sglang
=========== Cognition ===========
total score: 474.2857142857143
commonsense_reasoning score: 144.28571428571428
numerical_calculation score: 72.5
text_translation score: 162.5
code_reasoning score: 95.0

transformers
=========== Cognition ===========
total score: 633.5714285714286
commonsense_reasoning score: 148.57142857142856
numerical_calculation score: 125.0
text_translation score: 200.0
code_reasoning score: 160.0
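For an apples-to-apples check of the two backends, both runs should use strictly greedy decoding. A minimal sketch of how the server side of such a comparison might be set up is below; the model name, prompt, and image bytes are placeholders, not taken from the actual MME harness.

```python
# Sketch: build an OpenAI-style chat request for an sglang (or vllm) server,
# mirroring the temperature==0 (greedy) setting used in the MME runs.
# Model name, prompt, and image bytes are illustrative placeholders.
import base64

def build_chat_payload(image_bytes: bytes, question: str,
                       model: str = "Qwen/Qwen2-VL-7B-Instruct") -> dict:
    """Build a chat request with an inline base64 image and deterministic
    (temperature 0) decoding."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "temperature": 0,          # greedy decoding, as in the benchmark
        "max_tokens": 32,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": question},
            ],
        }],
    }

payload = build_chat_payload(b"\x89PNG...",
                             "Is there a cat in the image? Answer yes or no.")
print(payload["temperature"])  # 0
```

The payload can then be POSTed to the server's `/v1/chat/completions` endpoint; on the transformers side, passing `do_sample=False` to `generate` gives the matching greedy baseline.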

Environment

Python: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0]
CUDA available: True
GPU 0,1: NVIDIA A100-SXM4-80GB
GPU 0,1 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
CUDA Driver Version: 535.129.03
PyTorch: 2.4.0+cu121
sglang: 0.3.5.post2
flashinfer: 0.1.6+cu121torch2.4
triton: 3.0.0
transformers: 4.46.3
requests: 2.32.3
tqdm: 4.67.0
numpy: 1.26.4
aiohttp: 3.11.6
fastapi: 0.115.5
hf_transfer: 0.1.8
huggingface_hub: 0.26.2
interegular: 0.3.3
packaging: 24.2
PIL: 10.4.0
psutil: 6.1.0
pydantic: 2.9.2
uvicorn: 0.32.0
uvloop: 0.21.0
zmq: 26.2.0
vllm: 0.6.3.post1
multipart: 0.0.17
openai: 1.54.5
anthropic: 0.39.0
NVIDIA Topology:
GPU0 GPU1 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 NIC9 NIC10 NIC11 NIC12 NIC13 NIC14 NIC15 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 SYS SYS SYS SYS SYS SYS SYS SYS SYS PXB PXB PXB SYS PXB SYS SYS 0-31,64-91 0 N/A
GPU1 NV12 X PXB PXB PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PXB SYS 36-63,100-101 1 N/A
NIC0 SYS PXB X PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX SYS
NIC1 SYS PXB PIX X PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX SYS
NIC2 SYS PXB PIX PIX X SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX SYS
NIC3 SYS SYS SYS SYS SYS X PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX
NIC4 SYS SYS SYS SYS SYS PIX X PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX
NIC5 SYS SYS SYS SYS SYS PIX PIX X SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX
NIC6 SYS SYS SYS SYS SYS SYS SYS SYS X PIX PIX SYS SYS SYS PIX SYS SYS SYS
NIC7 SYS SYS SYS SYS SYS SYS SYS SYS PIX X PIX SYS SYS SYS PIX SYS SYS SYS
NIC8 SYS SYS SYS SYS SYS SYS SYS SYS PIX PIX X SYS SYS SYS PIX SYS SYS SYS
NIC9 PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X PIX PIX SYS PIX SYS SYS
NIC10 PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX X PIX SYS PIX SYS SYS
NIC11 PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX PIX X SYS PIX SYS SYS
NIC12 SYS SYS SYS SYS SYS SYS SYS SYS PIX PIX PIX SYS SYS SYS X SYS SYS SYS
NIC13 PXB SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX PIX PIX SYS X SYS SYS
NIC14 SYS PXB PIX PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS SYS X SYS
NIC15 SYS SYS SYS SYS SYS PIX PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
NIC8: mlx5_8
NIC9: mlx5_9
NIC10: mlx5_10
NIC11: mlx5_11
NIC12: mlx5_bond_0
NIC13: mlx5_bond_1
NIC14: mlx5_bond_2
NIC15: mlx5_bond_3

ulimit soft: 655350

@yizhang2077 self-assigned this Nov 21, 2024

@yizhang2077 (Collaborator)

Hello, can you give a vllm test result? It may be more helpful. Thanks

@Mr-Loevan (Author) commented Nov 21, 2024

> Hello, can you give a vllm test result? It may be more helpful. Thanks

Hi, thank you for the prompt reply.

vllm test results
=========== Cognition ===========
total score: 479.64285714285717
commonsense_reasoning score: 142.14285714285717
numerical_calculation score: 80.0
text_translation score: 162.5
code_reasoning score: 95.0

I also used the official Qwen2-VL-7B API from Aliyun and got a total score of around 470.
It is an unusual yet significant issue for qwen-vl users.

@Mr-Loevan Mr-Loevan changed the title [Bug] Qwen2-VL-7B with sglang Performance Degradation [Bug] Qwen2-VL-7B with sglang Performance Degradation on MME benchmark Nov 21, 2024
@yizhang2077 (Collaborator) commented Nov 21, 2024

Sorry for the bad performance.
Qwen2-VL in sglang is mainly adapted from the vllm implementation. Since the vllm score is close to sglang's, I think the main problem may lie in the vllm implementation.
I will follow the vllm community and try to rework qwen2vl based on the transformers implementation.

@thusinh1969 commented Nov 23, 2024

Oh, this is terrible! Have you reported this to vLLM? @Mr-Loevan

@jakep-allenai (Contributor)

Hmm, I am able to reproduce an issue where I see different logprobs between a Hugging Face implementation and sglang vs. vllm:

sglang run 1, v0.3.6
Top 5 tokens and their log probabilities:
HF Token: Ġ( Log Prob: -0.58 Prob 56.04%  SGLANG Token , Logprob -0.64 Prob 52.98%
HF Token: , Log Prob: -0.83 Prob 43.64%  SGLANG Token  ( Logprob -0.76 Prob 46.75%
HF Token: Ġin Log Prob: -7.20 Prob 0.07%  SGLANG Token  was Logprob -6.76 Prob 0.12%
HF Token: Ġwas Log Prob: -7.33 Prob 0.07%  SGLANG Token  in Logprob -7.64 Prob 0.05%
HF Token: Ġor Log Prob: -7.95 Prob 0.04%  SGLANG Token  or Logprob -8.64 Prob 0.02%

vllm run 1, v0.6.4.post1
Top 5 tokens and their log probabilities:
HF Token: Ġ( Log Prob: -0.58 Prob 56.04%  SGLANG Token  ( Logprob -0.35 Prob 70.41%
HF Token: , Log Prob: -0.83 Prob 43.64%  SGLANG Token , Logprob -1.23 Prob 29.35%
HF Token: Ġin Log Prob: -7.20 Prob 0.07%  SGLANG Token  ([ Logprob -7.73 Prob 0.04%
HF Token: Ġwas Log Prob: -7.33 Prob 0.07%  SGLANG Token  in Logprob -7.85 Prob 0.04%
HF Token: Ġor Log Prob: -7.95 Prob 0.04%  SGLANG Token  or Logprob -7.98 Prob 0.03%

What I see is a discrepancy in both vllm and sglang. sglang, at least when decoding with temperature 0, does eventually yield the same answer after 100 tokens, but by then it has already gone down a different path.
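A small helper can quantify this kind of mismatch from the top-k dumps above. This comparison function is an illustrative assumption, not part of any harness; the (token, logprob) pairs are taken from the sglang run above, where "Ġ" is the BPE marker for a leading space.

```python
# Sketch: compare HF vs. server top-k (token, logprob) lists — does the
# greedy (top-1) token agree, and how far apart are the probabilities of
# tokens that appear in both lists?
import math

def compare_topk(hf: list, server: list) -> dict:
    """hf uses raw BPE tokens (with the 'Ġ' space marker); server tokens
    are already detokenized, so normalize before comparing."""
    hf_probs = {t.replace("Ġ", " "): math.exp(lp) for t, lp in hf}
    srv_probs = {t: math.exp(lp) for t, lp in server}
    shared = hf_probs.keys() & srv_probs.keys()
    return {
        "top1_match": hf[0][0].replace("Ġ", " ") == server[0][0],
        "max_prob_gap": max(abs(hf_probs[t] - srv_probs[t]) for t in shared),
    }

# Values from the sglang run 1 table above.
hf = [("Ġ(", -0.58), (",", -0.83), ("Ġin", -7.20), ("Ġwas", -7.33), ("Ġor", -7.95)]
sg = [(",", -0.64), (" (", -0.76), (" was", -6.76), (" in", -7.64), (" or", -8.64)]
report = compare_topk(hf, sg)
print(report)  # top1_match is False: the greedy paths diverge at this step
```

On these numbers the top-1 tokens already disagree (HF picks " (" while sglang picks ","), with a probability gap of roughly 9 points on the swapped pair, which is enough to change every subsequent token under greedy decoding.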

@jakep-allenai (Contributor)

I've checked; alignment is much better if you don't include an image in the query:

Top 5 tokens and their log probabilities:
HF Token: Ġc Log Prob: -0.53 Prob 59.07%  SGLANG Token  c Logprob -0.53 Prob 59.09%
HF Token: Ġ[ Log Prob: -1.03 Prob 35.83%  SGLANG Token  [ Logprob -1.03 Prob 35.84%
HF Token: Ġ\ Log Prob: -4.03 Prob 1.78%  SGLANG Token  \ Logprob -4.03 Prob 1.78%
HF Token: Ġ\\ Log Prob: -4.40 Prob 1.23%  SGLANG Token  \\ Logprob -4.40 Prob 1.23%
HF Token: [ Log Prob: -5.21 Prob 0.54%  SGLANG Token [ Logprob -5.21 Prob 0.54%
