
Commit

update readme
ztxz16 committed Jul 9, 2024
1 parent a93b335 commit d309e60
Showing 2 changed files with 8 additions and 8 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -41,7 +41,7 @@ bash install.sh -DUSE_CUDA=ON # build the GPU version

### Running the demo program (python)

-Assuming our model is located in the "/mnt/hfmodels/Qwen/Qwen2-7B-Instruct/" directory
+Assuming our model is located in the "~/Qwen2-7B-Instruct/" directory

After compilation, you can use the following demos:

@@ -70,10 +70,10 @@ python3 -m ftllm.server -t 16 -p ~/Qwen2-7B-Instruct/ --port 8080 --model_name q
# Enter the fastllm/build-fastllm directory
# Command line chat program, supports typewriter effect (Linux only)
-./main -p model.flm
+./main -p ~/Qwen2-7B-Instruct/
# Simple webui, uses streaming output + dynamic batch, supports concurrent access
-./webui -p model.flm --port 1234
+./webui -p ~/Qwen2-7B-Instruct/ --port 1234
```

On Windows, compilation with CMake GUI + Visual Studio in the graphical interface is recommended.
@@ -85,7 +85,7 @@ On Windows, compilation with CMake GUI + Visual Studio in the graphical interface
``` python
# Create the model
from ftllm import llm
-model = llm.model("model.flm")
+model = llm.model("~/Qwen2-7B-Instruct/")

# Generate a response
print(model.response("你好"))
```
8 changes: 4 additions & 4 deletions README_EN.md
@@ -39,7 +39,7 @@ For compilation on other platforms, refer to the documentation:

### Running the demo program (python)

-Assuming our model is located in the "/mnt/hfmodels/Qwen/Qwen2-7B-Instruct/" directory:
+Assuming our model is located in the "~/Qwen2-7B-Instruct/" directory:

After compilation, you can use the following demos:

@@ -68,10 +68,10 @@ For architectures that cannot directly read Hugging Face models, refer to [Model
# Enter the fastllm/build-fastllm directory
# Command line chat program, supports typewriter effect (Linux only)
-./main -p model.flm
+./main -p ~/Qwen2-7B-Instruct/
# Simple webui, uses streaming output + dynamic batch, supports concurrent access
-./webui -p model.flm --port 1234
+./webui -p ~/Qwen2-7B-Instruct/ --port 1234
```

On Windows, compilation with CMake GUI + Visual Studio in the graphical interface is recommended.
@@ -83,7 +83,7 @@ For compilation issues, especially on Windows, refer to [FAQ](docs/faq.md).
``` python
# Model creation
from ftllm import llm
-model = llm.model("model.flm")
+model = llm.model("~/Qwen2-7B-Instruct/")

# Generate response
print(model.response("你好"))
```
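One detail worth noting about the new examples: the shell expands `~` before `./main` or `./webui` ever sees the path, but a string literal like `"~/Qwen2-7B-Instruct/"` passed to `llm.model()` reaches the library verbatim. Whether ftllm expands the tilde itself is not stated in this diff, so a defensive sketch (the explicit expansion step is an assumption on our part, not a documented ftllm requirement) is:

```python
import os

# The shell expands "~" for command-line tools, but Python hands
# the literal string through unchanged. Expanding it explicitly
# avoids depending on the library doing so internally.
model_path = os.path.expanduser("~/Qwen2-7B-Instruct/")

# Then load the model exactly as the diff above does:
# from ftllm import llm
# model = llm.model(model_path)
print(model_path)
```

On Linux this prints an absolute path such as `/home/<user>/Qwen2-7B-Instruct/`; either way, the string no longer begins with `~`.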
