I am using the 13B checkpoint with the Vicuna model. I followed the steps and am stuck on the error below, produced by:

(minigptv) ken@aibox:~/git-repos/MiniGPT-4$ python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0

The readme does not say which exact model file should be loaded. When I point the config at tokenizer.model, I get an invalid-JSON error; when I point it at config.json, I get the traceback below. I am downloading the 7B model to see whether that one works. Any assistance is much appreciated!
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
Initializing Chat
/home/ken/anaconda3/envs/minigptv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1727: FutureWarning: Calling LlamaTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
warnings.warn(
Traceback (most recent call last):
  File "/home/ken/git-repos/MiniGPT-4/demo.py", line 65, in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
  File "/home/ken/git-repos/MiniGPT-4/minigpt4/models/minigpt4.py", line 169, in from_config
    model = cls(
  File "/home/ken/git-repos/MiniGPT-4/minigpt4/models/minigpt4.py", line 45, in __init__
    super().__init__(
  File "/home/ken/git-repos/MiniGPT-4/minigpt4/models/minigpt_base.py", line 41, in __init__
    self.llama_model, self.llama_tokenizer = self.init_llm(
  File "/home/ken/git-repos/MiniGPT-4/minigpt4/models/base_model.py", line 174, in init_llm
    llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model_path, use_fast=False)
  File "/home/ken/anaconda3/envs/minigptv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1825, in from_pretrained
    return cls._from_pretrained(
  File "/home/ken/anaconda3/envs/minigptv/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1988, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/ken/anaconda3/envs/minigptv/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py", line 96, in __init__
    self.sp_model.Load(vocab_file)
  File "/home/ken/anaconda3/envs/minigptv/lib/python3.9/site-packages/sentencepiece/__init__.py", line 961, in Load
    return self.LoadFromFile(model_file)
  File "/home/ken/anaconda3/envs/minigptv/lib/python3.9/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: could not parse ModelProto from /home/ken/git-repos/vicuna/config.json
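The RuntimeError comes from SentencePiece, not from the MiniGPT-4 code itself: `LlamaTokenizer` hands the configured path to `sp_model.Load()`, which expects the binary serialized SentencePiece model (`tokenizer.model`), so a JSON file such as `config.json` can never parse as a ModelProto. One way to catch this kind of misconfiguration before launching the demo is a quick sanity check of the weights directory; a minimal sketch (the helper name and the required-file list are my own, assuming the usual Hugging Face layout where `tokenizer.model` and `config.json` sit together in one directory):

```python
from pathlib import Path

# Files a converted HF-format Vicuna/LLaMA directory is expected to contain.
REQUIRED = ("tokenizer.model", "config.json")

def missing_weight_files(weights_dir):
    """Return the required files missing from weights_dir.

    An empty list means the directory at least looks loadable by
    LlamaTokenizer.from_pretrained(weights_dir); a non-empty list names
    what still has to be downloaded or moved into place.
    """
    root = Path(weights_dir)
    if not root.is_dir():
        # Passing a single file to from_pretrained is deprecated anyway,
        # so insist on a directory here.
        raise NotADirectoryError(f"{weights_dir} is not a directory")
    return [name for name in REQUIRED if not (root / name).is_file()]
```

Calling `missing_weight_files("/home/ken/git-repos/vicuna")` before running demo.py would have reported whether `tokenizer.model` was actually in place.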
Confirmed the same issue happens with the 7B model. When I change the model path to the directory instead, I get an error saying the location is a directory. Really stuck here. What can I do to fix it?
Never mind, fixed it. The issue was that the checkpoint path in the config must be the full path to the checkpoint file, not just its directory. Once I fixed that, it runs. Thanks!
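In other words, the two paths in the eval config point at different kinds of things: the LLM path must be the directory holding the converted Vicuna weights (so the tokenizer and config files can be discovered), while the MiniGPT-4 checkpoint path must be the file itself. A sketch of the relevant fragment; the key names (`llama_model`, `ckpt`) and the checkpoint filename below are assumptions based on my reading of MiniGPT-4's eval_configs/minigpt4_eval.yaml, so adjust them to your checkout:

```yaml
model:
  # Directory containing tokenizer.model, config.json, and the weight
  # shards -- NOT a single file inside it.
  llama_model: "/home/ken/git-repos/vicuna"
  # Full path to the pretrained MiniGPT-4 checkpoint FILE, not its
  # directory (hypothetical filename shown).
  ckpt: "/path/to/pretrained_minigpt4.pth"
```

Pointing `ckpt` at a directory explains the "location is a directory" error above, just as pointing `llama_model` at config.json explains the original ModelProto error.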