[Usage]: MiniCPM-Llama3-V 2.5 errors under vllm: AttributeError: 'list' object has no attribute 'to' #6
Comments
Below is the full traceback:

```
AttributeError                            Traceback (most recent call last)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/entrypoints/llm.py:156, in LLM.__init__(self, model, tokenizer, tokenizer_mode, skip_tokenizer_init, trust_remote_code, tensor_parallel_size, dtype, quantization, revision, tokenizer_revision, seed, gpu_memory_utilization, swap_space, cpu_offload_gb, enforce_eager, max_context_len_to_capture, max_seq_len_to_capture, disable_custom_all_reduce, **kwargs)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/engine/llm_engine.py:426, in LLMEngine.from_engine_args(cls, engine_args, usage_context, stat_loggers)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/engine/llm_engine.py:264, in LLMEngine.__init__(self, model_config, cache_config, parallel_config, scheduler_config, device_config, load_config, lora_config, multimodal_config, speculative_config, decoding_config, observability_config, prompt_adapter_config, executor_class, log_stats, usage_context, stat_loggers)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/engine/llm_engine.py:363, in LLMEngine._initialize_kv_caches(self)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/executor/gpu_executor.py:92, in GPUExecutor.determine_num_available_blocks(self)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/worker/worker.py:179, in Worker.determine_num_available_blocks(self)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/worker/model_runner.py:759, in GPUModelRunnerBase.profile_run(self)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/worker/model_runner.py:1096, in ModelRunner.prepare_model_input(self, seq_group_metadata_list, virtual_engine, finished_requests_ids)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/worker/model_runner.py:672, in GPUModelRunnerBase._prepare_model_input_tensors(self, seq_group_metadata_list, finished_requests_ids)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/worker/model_runner.py:444, in ModelInputForGPUBuilder.build(self)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/multimodal/base.py:87, in MultiModalInputs.batch(inputs_list, device)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/multimodal/base.py:88, in (.0)
File ~/anaconda3/envs/MiniCPMV/lib/python3.10/site-packages/vllm/multimodal/base.py:54, in MultiModalInputs.try_concat(tensors, device)

AttributeError: 'list' object has no attribute 'to'
```
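The final frame suggests what went wrong: the batching code reached a Python `list` where it expected a single tensor and called `.to(device)` on it, which lists don't support. Below is a minimal, torch-free sketch of that failure mode and of the recursive fix; `FakeTensor` and `move_to_device` are hypothetical stand-ins, not vllm's actual code, which (per the comment further down) was patched upstream.

```python
# Hypothetical stand-in for torch.Tensor: all we need is a .to() method.
class FakeTensor:
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        # Mimic torch semantics: return a copy on the target device.
        return FakeTensor(device)


def move_to_device(value, device):
    """Patched helper: recurse into lists instead of calling .to() on them."""
    if isinstance(value, list):
        # A multimodal input may be a list of tensors (e.g. one per image
        # slice); move each element rather than the list object itself.
        return [move_to_device(v, device) for v in value]
    return value.to(device)


# MiniCPM-V's image processor returns a *list* of tensors per prompt,
# which is what triggered the original crash.
pixel_values = [FakeTensor(), FakeTensor()]

moved = move_to_device(pixel_values, "cuda:0")
print(moved[0].device)  # -> cuda:0

# The broken code path, for contrast:
#   pixel_values.to("cuda:0")
# raises: AttributeError: 'list' object has no attribute 'to'
```

The fix in spirit is just type dispatch: treat "tensor" and "list of tensors" as separate cases when moving multimodal inputs to the device.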
Same error here.
These are things I discussed with the vllm team yesterday, and we've got our PR merged into
Thanks a lot!
Your current environment
For me, even MiniCPM-Llama3-V 2.5 doesn't work.
Running minicpmv_example.py raises: AttributeError: 'list' object has no attribute 'to'
Environment:
How would you like to use vllm
I want to run inference of MiniCPM-Llama3-V 2.5.