Checklist
1. I have searched the related issues but could not find the expected help.
2. The bug has not been fixed in the latest version.
3. Please note that if the bug-related issue you submit lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve it, reducing the likelihood of receiving feedback.
Describe the bug
I'm using the Docker image version 0.6.1-cu12 to deploy qwen2-vl-7b locally.
Reproduction
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=" \
    -p 23333:23333 \
    --ipc=host \
    openmmlab/lmdeploy:latest \
    lmdeploy serve api_server qwen2-vl-7b
Environment
CUDA 12, A100 40 GB
Error traceback
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
2024-10-22 01:36:08,354 - lmdeploy - WARNING - archs.py:53 - Fallback to pytorch engine because `/data/qwen-vl-7b` not supported by turbomind engine.
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
2024-10-22 01:36:09,521 - lmdeploy - ERROR - builder.py:58 - matching vision model: Qwen2VLModel failed
Traceback (most recent call last):
File "/opt/lmdeploy/lmdeploy/vl/model/qwen2.py", line 15, in check_qwen_vl_deps_install
import qwen_vl_utils # noqa: F401
ModuleNotFoundError: No module named 'qwen_vl_utils'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/py3/bin/lmdeploy", line 33, in <module>
sys.exit(load_entry_point('lmdeploy', 'console_scripts', 'lmdeploy')())
File "/opt/lmdeploy/lmdeploy/cli/entrypoint.py", line 42, in run
args.run(args)
File "/opt/lmdeploy/lmdeploy/cli/serve.py", line 329, in api_server
run_api_server(args.model_path,
File "/opt/lmdeploy/lmdeploy/serve/openai/api_server.py", line 1045, in serve
VariableInterface.async_engine = pipeline_class(
File "/opt/lmdeploy/lmdeploy/serve/vl_async_engine.py", line 24, in __init__
self.vl_encoder = ImageEncoder(model_path,
File "/opt/lmdeploy/lmdeploy/vl/engine.py", line 90, in __init__
self.model = load_vl_model(model_path, backend_config=backend_config)
File "/opt/lmdeploy/lmdeploy/vl/model/builder.py", line 56, in load_vl_model
return module(**kwargs)
File "/opt/lmdeploy/lmdeploy/vl/model/base.py", line 31, in __init__
self.build_model()
File "/opt/lmdeploy/lmdeploy/vl/model/qwen2.py", line 35, in build_model
check_qwen_vl_deps_install()
File "/opt/lmdeploy/lmdeploy/vl/model/qwen2.py", line 17, in check_qwen_vl_deps_install
raise ImportError(
ImportError: please install qwen_vl_utils by pip install qwen_vl_utils
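The ImportError indicates the Docker image ships without the optional `qwen_vl_utils` package that the Qwen2-VL vision model loader requires. A minimal workaround sketch, assuming the image tag from the report above and no other missing dependencies, is to build a derived image that adds the package:

```dockerfile
# Hypothetical derived image: extends the lmdeploy image from the report
# with the dependency named in the ImportError.
FROM openmmlab/lmdeploy:latest

# qwen_vl_utils is the module the traceback reports as missing.
RUN pip install qwen_vl_utils
```

Building this (e.g. `docker build -t lmdeploy-qwen2vl .`) and substituting the new tag into the same `docker run` command should get past the import check; whether the PyTorch-engine fallback then serves the model correctly is a separate question.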