Converting LoRA to GGML to GGUF #3953
You don't need to convert the LoRA from GGML to GGUF. I think what you may be doing wrong is trying to load the LoRA with […]. Edit: I think this should work as the base model: https://huggingface.co/TheBloke/LLaMA-13b-GGUF
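For context, the intended flow here (a hedged sketch; file names are illustrative) is to load the converted GGML adapter at run time rather than converting it to GGUF. With the llama.cpp `main` binary of that era, `--lora` takes the adapter, and `--lora-base` optionally points at an f16/f32 copy of the base model when the `-m` model is quantized:

```sh
# Illustrative paths; the adapter .bin is what convert-lora-to-ggml.py produces.
./main -m models/llama-13b.Q5_K_M.gguf \
       --lora models/loras/ggml-adapter-model.bin \
       --lora-base models/llama-13b.f16.gguf \
       -p "Ciao, come stai?"
```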
Thank you Kerfuffle, let me process your answer (I'm quite a newbie with LLMs) and I will come back to you once I make progress. Thank you again for the explanation.
Hello, I am also facing the same problem. I was attempting to use a different LoRA adapter, but for now, I followed the previous conversation and downloaded two models. I put TheBloke/LLaMA-13b-GGUF into the llama.cpp/models directory and andreabac3/Fauno-Italian-LLM-13B into the llama.cpp/models/loras directory. After that, I ran the main command as follows:

[…]

However, the result was as follows (with prior output omitted for brevity):

[…]

I am running the latest code on a Docker container with the Ubuntu 22.04 image. I apologize if I missed any documentation and am not using this correctly. If I could successfully use the LoRA adapter in llama.cpp, it would make a significant difference to my project.
@yuki-tomita-127
Did you already convert the LoRA using […]?
Thank you for your response. I apologize for the lack of detail in my previous post. I have attempted to use the […] script.

Output:

[…]

This seems to have worked successfully.

[…]

Output:

[…]

This error occurs when I do so, which I believe is the same result that @xcottos experienced.
@yuki-tomita-127 Oh, I'm very sorry. I meant to write convert-lora-to-ggml.py. So just to be clear, you'll use […]
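A sketch of the corrected two-step flow, assuming the PEFT adapter files (adapter_config.json and adapter_model.bin) sit in models/loras/Fauno-Italian-LLM-13B (paths are illustrative):

```sh
# Step 1: convert the PEFT LoRA to a GGML adapter; the script writes the
# converted adapter next to the input files.
python3 convert-lora-to-ggml.py models/loras/Fauno-Italian-LLM-13B

# Step 2: pass the resulting adapter to main via --lora (not via -m):
./main -m models/llama-13b.Q5_K_M.gguf \
       --lora models/loras/Fauno-Italian-LLM-13B/ggml-adapter-model.bin
```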
@yuki-tomita-127
In your launch command, shouldn't you change […]?
Their problem was that I accidentally told them the wrong script, so they weren't able to produce the converted adapter at all.
Indeed, passing the output of […]. Also, I'd like to apologize if I've taken over the thread a bit; @xcottos was the one who started this discussion. Sorry about that.
This issue was closed because it has been inactive for 14 days since being marked as stale. |
Same for me.
Hi everybody,
I have a Hugging Face model (https://huggingface.co/andreabac3/Fauno-Italian-LLM-13B) that I would like to convert to GGUF.
It is a LoRA model, and I was able to convert it to GGML using convert-lora-to-ggml.py.
Now when I try to convert it to GGUF using convert-llama-ggml-to-gguf.py, the GGML file generated by the first conversion has a magic number (b'algg') that is not recognised (a quick way to inspect that magic is sketched below).
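As a side note on that magic number (paths below are illustrative): b'algg' is the little-endian encoding of llama.cpp's 'ggla' magic, which marks a GGML LoRA adapter rather than a full GGML model, so the full-model converter rejects it. You can check with:

```sh
# Dump the first 4 bytes of the converted adapter (requires xxd).
xxd -l 4 models/loras/ggml-adapter-model.bin
# 00000000: 616c 6767                                algg
# 'algg' read as a little-endian uint32 is 0x67676c61 ('ggla'),
# the GGML LoRA-adapter magic; convert-llama-ggml-to-gguf.py
# expects a full-model GGML magic instead.
```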
What am I doing wrong?
Thank you
Luca