
A few problems deploying text_generation_webui #10

Open
yisampi opened this issue Jul 27, 2023 · 2 comments


yisampi commented Jul 27, 2023

1. FileNotFoundError: [Errno 2] No such file or directory: './models/visualcla_merged-7b/pytorch_model.bin'
For the merged weights, I ran
cp visualcla/pytorch_model.bin models/visualcla_merged-7b/
but I'm not sure whether that is correct.

2.OSError: Can't load the configuration of './models/visualcla_merged-7b/vision_encoder'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './models/visualcla_merged-7b/vision_encoder' is the correct path to a directory containing a config.json file
For the merged weights, I ran
cp -r ./visualcla/vision_encoder/ ./models/visualcla_merged-7b/
but I'm not sure whether that is correct.

3.OSError: ./models/visualcla_merged-7b does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/./models/visualcla_merged-7b/main' for available files.
For the merged weights, I ran
cp ./visualcla/preprocessor_config.json models/visualcla_merged-7b/
but I'm not sure whether that is correct.

4. KeyError: 'visual_resampler_config'
After the steps above, I re-ran server.py:
$ python server.py --model=visualcla_merged-7b --multimodal-pipeline=visualcla-7b --chat --settings=settings-visualcla.yaml --share --load-in-8bit
2023-07-27 09:31:45 WARNING:The gradio "share link" feature uses a proprietary executable to create a reverse tunnel. Use it with care.
2023-07-27 09:31:47 INFO:Loading settings from settings-visualcla.yaml...
2023-07-27 09:31:47 INFO:Loading visualcla_merged-7b...
2023-07-27 09:38:36 WARNING:models/visualcla_merged-7b/special_tokens_map.json is different from the original LlamaTokenizer file. It is either customized or outdated.
2023-07-27 09:38:36 INFO:Loaded the model in 408.25 seconds.

2023-07-27 09:38:36 INFO:Loading the extension "multimodal"...
2023-07-27 09:38:36 INFO:VisualCLA - Loading CLIP from ./models/visualcla_merged-7b/vision_encoder as torch.float32 on cuda:0...
2023-07-27 09:38:38 INFO:VisualCLA - Loading visual resampler from ./models/visualcla_merged-7b/ as torch.float32 on cuda:0...
Traceback (most recent call last):
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/server.py", line 1179, in <module>
    create_interface()
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/server.py", line 1086, in create_interface
    extensions_module.create_extensions_block()
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/modules/extensions.py", line 175, in create_extensions_block
    extension.ui()
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/script.py", line 119, in ui
    multimodal_embedder = MultimodalEmbedder(params)
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/multimodal_embedder.py", line 27, in __init__
    pipeline, source = load_pipeline(params)
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipeline_loader.py", line 30, in load_pipeline
    pipeline = getattr(pipeline_modules[k], 'get_pipeline')(shared.args.multimodal_pipeline, params)
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipelines/visualcla/pipelines.py", line 11, in get_pipeline
    return VisualCLA_7B_Pipeline(params)
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipelines/visualcla/visualcla.py", line 140, in __init__
    super().__init__(params)
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipelines/visualcla/visualcla.py", line 30, in __init__
    self.image_processor, self.vision_tower, self.visual_resampler, self.image_projection_layer = self._load_models()
  File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/extensions/multimodal/pipelines/visualcla/visualcla.py", line 47, in _load_models
    visual_resampler_config = VisualResamplerConfig.from_dict(json.load(open(os.path.join(shared.settings['visualcla_merged_model'], 'config.json')))['visual_resampler_config'])
KeyError: 'visual_resampler_config'

The contents of config.json:
more models/visualcla_merged-7b/config.json
{
"_name_or_path": "chinese-alpaca-plus-7b/",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 2048,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"pad_token_id": 0,
"rms_norm_eps": 1e-06,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.30.2",
"use_cache": true,
"vocab_size": 49954
}

Could you please take a look? Thanks.

GoGoJoestar (Collaborator) commented

Regarding 1, 2, and 3: for merged weights, you need to copy the text-side weights, i.e. the text_encoder directory (your visualcla/text_encoder), into text-generation-webui's models directory and name it visualcla_merged-7b; then copy the four tokenizer-related files into models/visualcla_merged-7b; finally, set the value of visualcla_merged_model in text-generation-webui's settings-visualcla.yaml to the merged-weights directory (your visualcla directory).
The models/visualcla_merged-7b directory should contain only the text-side files; do not put the vision_encoder, resampler, and other parts in it. Those parts are loaded from the directory configured in settings-visualcla.yaml, so it is enough to set that path correctly. From the error above, that path is probably set incorrectly.
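The steps above can be sketched as follows. Everything runs in a scratch directory so it is safe to try; the stand-in files mimic the merged-weights output, and the tokenizer file names are typical for a LLaMA-family model and may differ in your checkout:

```shell
set -e
root=$(mktemp -d)
cd "$root"

# Stand-in for the merged-weights directory produced by the merge script.
mkdir -p visualcla/text_encoder
touch visualcla/text_encoder/config.json \
      visualcla/tokenizer.model visualcla/tokenizer_config.json \
      visualcla/special_tokens_map.json visualcla/added_tokens.json

# 1) Copy text_encoder into models/ under the name visualcla_merged-7b
#    (text side only; no vision_encoder or resampler here).
mkdir -p models
cp -r visualcla/text_encoder models/visualcla_merged-7b

# 2) Copy the tokenizer-related files next to it.
cp visualcla/tokenizer.model visualcla/tokenizer_config.json \
   visualcla/special_tokens_map.json visualcla/added_tokens.json \
   models/visualcla_merged-7b/

# 3) In settings-visualcla.yaml, visualcla_merged_model must point at the
#    merged-weights directory itself (here "$root/visualcla"), NOT at
#    models/visualcla_merged-7b.
```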

yisampi (Author) commented Jul 27, 2023

Yes, it was my configuration mistake.
visualcla_merged_model : [Path/For/Merged_Model] — I had previously set this path to models/visualcla_merged-7b, but it should be the path to the merged model.
Thanks.
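For reference, the corrected setting looks like this (a sketch; the path is an example and should be replaced with your own merged-weights directory):

```yaml
# settings-visualcla.yaml (excerpt): point at the merged weights,
# not at models/visualcla_merged-7b
visualcla_merged_model: /path/to/visualcla
```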
