Checklist
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
5. Please use English, otherwise it will be closed.
Describe the bug
When I run the benchmark, the server fails once the request rate gets higher:
[2024-11-20 16:40:55 TP0] Traceback (most recent call last):
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 1196, in run_scheduler_process
scheduler.event_loop_normal()
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 325, in event_loop_normal
result = self.run_batch(batch)
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 773, in run_batch
logits_output, next_token_ids = self.tp_worker.forward_batch_generation(
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/managers/tp_worker.py", line 139, in forward_batch_generation
logits_output = self.model_runner.forward(forward_batch)
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/model_executor/model_runner.py", line 579, in forward
return self.forward_extend(forward_batch)
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/model_executor/model_runner.py", line 561, in forward_extend
self.attn_backend.init_forward_metadata(forward_batch)
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/layers/attention/flashinfer_backend.py", line 145, in init_forward_metadata
self.indices_updater_prefill.update(
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/layers/attention/flashinfer_backend.py", line 511, in update_single_wrapper
self.call_begin_forward(
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/sglang/srt/layers/attention/flashinfer_backend.py", line 621, in call_begin_forward
wrapper_paged.begin_forward(
File "/root/miniconda3/envs/sglang-backend/lib/python3.10/site-packages/flashinfer/prefill.py", line 832, in plan
self._wrapper.plan(
RuntimeError: Failed to allocate memory for batch_prefill_tmp_v with size 435814400 and alignment 16 in AlignedAllocator
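(For context: flashinfer's plan step carves temporary buffers like batch_prefill_tmp_v out of a fixed-size workspace buffer that sglang allocates up front, so the ~416 MiB request here exceeded that workspace rather than total GPU memory. A possible mitigation, assuming these launch flags behave the same way in 0.3.5, is to bound how many prefill tokens get planned per batch, e.g.:
python -m sglang.launch_server ... --chunked-prefill-size 2048 --max-prefill-tokens 4096
Here --chunked-prefill-size splits long prefills into smaller chunks and --max-prefill-tokens caps the tokens scheduled per forward pass; the values above are illustrative, not tuned.)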
Reproduction
version: 0.3.5
model: qwen2.5-7b-instruct
server command: python -m sglang.launch_server --trust-remote-code --disable-radix-cache --chat-template /root/llm/vllm-inference/model/template/template_qwen.jinja --log-level info --model-path /root/.cache/modelscope/hub/Qwen/Qwen2___5-7B-Instruct --host 0.0.0.0 --port 50050
benchmark args: --dataset-name random --random-input-len 1000 --tokenizer /root/.cache/modelscope/hub/Qwen/Qwen2___5-7B-Instruct/ --num-prompts 480 --request-rate 8
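(The benchmark args above are presumably passed to sglang.bench_serving; the full client command would then look like the following, where --backend sglang and the --host/--port values are assumptions matching the server command above:
python -m sglang.bench_serving --backend sglang --host 0.0.0.0 --port 50050 --dataset-name random --random-input-len 1000 --tokenizer /root/.cache/modelscope/hub/Qwen/Qwen2___5-7B-Instruct/ --num-prompts 480 --request-rate 8)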
Environment
Python: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]
CUDA available: True
GPU 0: NVIDIA H20
GPU 0 Compute Capability: 9.0
CUDA_HOME: /usr/local
NVCC: Cuda compilation tools, release 12.5, V12.5.40
CUDA Driver Version: 550.90.07
PyTorch: 2.4.0+cu121
sglang: 0.3.5
flashinfer: 0.1.6+cu121torch2.4
triton: 3.0.0
transformers: 4.45.2
requests: 2.32.3
tqdm: 4.66.5
numpy: 1.26.4
aiohttp: 3.10.9
fastapi: 0.115.0
hf_transfer: 0.1.8
huggingface_hub: 0.25.2
interegular: 0.3.3
packaging: 24.1
PIL: 10.4.0
psutil: 6.0.0
pydantic: 2.9.2
uvicorn: 0.31.1
uvloop: 0.20.0
zmq: 26.2.0
vllm: 0.6.3.post1
multipart: 0.0.12
openai: 1.51.2
anthropic: 0.36.0
NVIDIA Topology:
      GPU0  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0  X     0-15          0              N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
ulimit soft: 65535