How can multiple LoRA models be merged? #242

Open
Orangeices opened this issue Jul 18, 2023 · 0 comments
Inference with pure C++

Thanks to the earlier work of Llama.cpp and Alpaca.cpp. Please note:

  • These steps should be performed after you have finished training your LoRA.
  • The merged checkpoint takes roughly 13G of disk space for the 7B model and roughly 37G for 13B; 30B and 65B have not been tested due to our limited hardware. Note that the conversion process consumes a lot of memory (e.g. 13B may exceed 64G, but you can work around this by increasing swap space).
  • In addition, the 7B, 13B, 30B, and 65B checkpoints are split into 1, 2, 4, and 8 shards by default, respectively (this is also a fixed setting in the cpp code).

1. First, you need to merge your LoRA weights with the original model and convert them to ggml format for cpp inference.


bash prepare_llama_cpp.sh

( In our code, the hf model and the lora are first converted into a merged consolidated.0x.pth, where x corresponds to num_shards, and then converted into ggml-model-f16.bin. )
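( For reference, the merge step itself is conceptually similar to the following Python sketch. This is not the repository's script; it assumes the LoRA was trained with the PEFT library, and the model/adapter paths are placeholders. )

import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Placeholder paths -- substitute your own base model and LoRA checkpoint.
BASE_MODEL = "path/to/llama-7b-hf"
LORA_PATH = "path/to/lora-checkpoint"

# Load the HF base model and attach the LoRA adapter.
base = LlamaForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, LORA_PATH)

# Fold the LoRA deltas into the base weights and save a plain HF checkpoint,
# which can then be converted to consolidated.0x.pth / ggml-model-f16.bin.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-hf")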

After reaching this step, running

bash prepare_llama_cpp.sh

merges the hf model and the lora. I would like to ask: if there are multiple lora models, can they also be merged, and how should that be done?
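( One possible approach, not taken from this repository: if the adapters were trained with the PEFT library, they can be merged sequentially, folding each adapter into the weights before loading the next. The paths below are placeholders. )

import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

BASE_MODEL = "path/to/llama-7b-hf"                 # placeholder base model
LORA_PATHS = ["path/to/lora-a", "path/to/lora-b"]  # placeholder adapters

model = LlamaForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
for lora in LORA_PATHS:
    # Attach one adapter, then merge its deltas into the weights so the
    # next adapter is applied on top of the already-merged model.
    model = PeftModel.from_pretrained(model, lora)
    model = model.merge_and_unload()

model.save_pretrained("path/to/merged-multi-lora-hf")

( Whether stacking adapters this way gives sensible results depends on the tasks; the conversion to ggml would then proceed as in step 1 above. )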
