
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xce in position 4411: invalid continuation byte #489

Open
huqiangDu opened this issue Sep 19, 2024 · 0 comments


huqiangDu commented Sep 19, 2024

When I use alpaca2flm.py to convert the llama2-13b model, I encountered this error:

Traceback (most recent call last):
  File "/home/xxx/fastllm/tools/scripts/alpaca2flm.py", line 18, in <module>
    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "<FLM_FIX_TOKEN_1>",
  File "/usr/local/lib/python3.10/dist-packages/ftllm-0.0.0.1-py3.10.egg/ftllm/torch2flm.py", line 223, in tofile
    tokenizer_data = json.load(f)
  File "/usr/lib/python3.10/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/usr/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xce in position 4411: invalid continuation byte

and my Python code is:

import sys

import torch
from transformers import AutoTokenizer, LlamaForCausalLM
from ftllm import torch2flm

if __name__ == "__main__":
    model_name = sys.argv[3] if len(sys.argv) >= 4 else 'meta-llama/Llama-2-13b-chat-hf'
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    # torch_dtype=torch.float16 is used here; if it will not cause an OOM error, you can load the model in float32.
    model = LlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
    conf = model.config.__dict__
    conf["model_type"] = "llama"
    dtype = sys.argv[2] if len(sys.argv) >= 3 else "float16"
    exportPath = sys.argv[1] if len(sys.argv) >= 2 else "alpaca-33b-" + dtype + ".flm"
    # add custom code here
    # torch2flm.tofile(exportPath, model, tokenizer, dtype = dtype)
    torch2flm.tofile(exportPath, model, tokenizer, pre_prompt = "<FLM_FIX_TOKEN_1>",
                     user_role = "[INST] ", bot_role = " [/INST]",
                     history_sep = " <FLM_FIX_TOKEN_2><FLM_FIX_TOKEN_1>", dtype = dtype)

How can I fix it?
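Judging from the traceback, torch2flm.tofile calls json.load on a tokenizer file, and json.load reads the file as UTF-8 text; the byte 0xce at position 4411 therefore means whatever file it opened is not valid UTF-8 JSON (for example a binary SentencePiece tokenizer.model, or a truncated/corrupted download). Below is a minimal diagnostic sketch to check which tokenizer files the model directory contains and whether they decode and parse cleanly; the model_dir path is an assumption you would replace with your local model/cache directory.

import json
from pathlib import Path

# Assumption: replace with the local directory holding the model files
# (e.g. the Hugging Face cache snapshot for meta-llama/Llama-2-13b-chat-hf).
model_dir = Path("/path/to/Llama-2-13b-chat-hf")

for name in ("tokenizer.json", "tokenizer.model", "tokenizer_config.json"):
    path = model_dir / name
    if not path.exists():
        print(f"{name}: not present")
        continue
    raw = path.read_bytes()
    try:
        raw.decode("utf-8")  # json.load() requires the file to be valid UTF-8 text
        print(f"{name}: valid UTF-8, {len(raw)} bytes")
        if name.endswith(".json"):
            json.loads(raw)  # also confirm it parses as JSON
            print(f"{name}: parses as JSON")
    except UnicodeDecodeError as err:
        # A file that fails here is the one json.load() in torch2flm.py would choke on.
        print(f"{name}: NOT valid UTF-8 -> {err}")

If one of the JSON files fails to decode, it may be corrupted or only partially downloaded, and re-downloading that file could resolve the error.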
