If you want to submit another line, end your input in '\'.
Breakpoint 1, inferllm::ModelImp::tokenize (this=0x51e1b0, text=" Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n", bos=true) at /root/InferLLM/src/core/model_imp.cpp:83
83 std::vector<Vocab::Id> res;
(gdb) c
Continuing.
Breakpoint 2, inferllm::ModelImp::tokenize (this=0x51e1b0, text=" Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n", bos=true) at /root/InferLLM/src/core/model_imp.cpp:92
92 for (int i = 0; i < len; i++) {
(gdb) i b
Num Type Disp Enb Address What
1 breakpoint keep y 0x000000000040e993 in inferllm::ModelImp::tokenize(std::string const&, bool) at /root/InferLLM/src/core/model_imp.cpp:83
breakpoint already hit 1 time
2 breakpoint keep y 0x000000000040e9fc in inferllm::ModelImp::tokenize(std::string const&, bool) at /root/InferLLM/src/core/model_imp.cpp:92
breakpoint already hit 1 time
(gdb) c
Continuing.
failed to tokenize string!
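For context, here is a minimal, self-contained sketch of a greedy longest-match tokenizer of the kind the breakpoint at model_imp.cpp:92 appears to be stepping through. This is not InferLLM's actual code; the vocabulary layout, the loop structure, and the failure path are assumptions, shown only to illustrate how a per-character scan over the prompt can end up reporting "failed to tokenize string!" when some substring has no entry in the vocabulary:

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

using Id = int;

// Greedy longest-match tokenization: at each position, take the longest
// substring that exists in the vocab. Returns false if no token covers
// the current position, which is the kind of condition that would make
// a caller print "failed to tokenize string!".
static bool tokenize(const std::string& text,
                     const std::unordered_map<std::string, Id>& vocab,
                     std::vector<Id>& out) {
    const size_t len = text.size();
    size_t i = 0;
    while (i < len) {
        size_t best_len = 0;
        Id best_id = -1;
        for (size_t l = len - i; l >= 1; --l) {
            auto it = vocab.find(text.substr(i, l));
            if (it != vocab.end()) {
                best_len = l;
                best_id = it->second;
                break;
            }
        }
        if (best_len == 0) {
            // e.g. a byte or character the vocab was not built with
            return false;
        }
        out.push_back(best_id);
        i += best_len;
    }
    return true;
}

int main() {
    std::unordered_map<std::string, Id> vocab = {{" ", 1}, {"Below", 2}, {"is", 3}};
    std::vector<Id> ids;
    if (!tokenize(" Below is", vocab, ids)) {
        std::fprintf(stderr, "failed to tokenize string!\n");
        return 1;
    }
    for (Id id : ids) std::printf("%d ", id);
    std::printf("\n");
    return 0;
}
```

If the real tokenizer fails in a similar way, the interesting thing to inspect under GDB is which part of the prompt (or which vocab lookup) falls through at model_imp.cpp:92.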
I got this error on CentOS 7:
failed to tokenize string!
[root@VM-0-15-centos build]# ./llama -m chinese-alpaca-7b-q4.bin -t 2
main: seed = 1700961097
model is new , version = 1
load: n_vocab = 49954
load: n_ctx = 2048
load: n_embd = 4096
load: n_mult = 256
load: n_head = 32
load: n_layer = 32
load: n_rot = 128
load: model ftype = 2
total weight length = 4304332800
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000
== Running in chat mode. ==
failed to tokenize string!
Killed
Originally posted by @Mignet in #28 (comment)
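The log above suggests the failure happens before any user input: the built-in instruction prompt (the same " Below is an instruction that describes a task. ..." string seen in the GDB breakpoint) is tokenized right after "== Running in chat mode. ==". The following sketch only mirrors that observed order; `tokenize_prompt` is a hypothetical stand-in, not InferLLM's actual chat loop:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical stand-in that fails the same way the real call appears to:
// it returns an empty token list for the built-in instruction prompt.
static std::vector<int> tokenize_prompt(const std::string& /*text*/, bool /*bos*/) {
    return {};
}

int main() {
    std::printf("== Running in chat mode. ==\n");
    const std::string prompt =
            " Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n";
    std::vector<int> ids = tokenize_prompt(prompt, /*bos=*/true);
    if (ids.empty()) {
        // Matches the observed output; the subsequent "Killed" in the log
        // may be a separate out-of-memory kill by the kernel.
        std::fprintf(stderr, "failed to tokenize string!\n");
        return 1;
    }
    return 0;
}
```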