
Can't get yuna to respond #113

Open
greentree-ai opened this issue Nov 1, 2024 · 3 comments

Comments

@greentree-ai

I am running Yuna from a Docker container (`python:3.12`) and I cannot get Yuna to say anything. I downloaded the model `yuna-ai-v3-atomic-q_6_k.gguf`, placed it in `lib/models/yuna`, and updated `static/config.json` with `"yuna_default_model": "yuna-ai-v3-atomic-q_6_k"`.
Whenever I send a message, this is all the console prints:

Response content:
Response:

No errors.
My exact procedure is:

  1. Start docker container
  2. git clone
  3. pip install -r requirements.txt
  4. copy model to lib/models/yuna
  5. configure config.json (or don't, I've tried both)
  6. python index.py
  7. send any message in chat

I feel like I'm missing something, so this could be my fault.
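
The empty `Response:` lines suggest the model loads but generation returns nothing. One way to check whether llama-cpp-python itself is at fault is to call it directly against the same GGUF file, outside Yuna. This is a minimal sketch, not Yuna's actual code; the model path comes from the steps above, and the `extract_text` helper is my own name for unpacking the completion dict:

```python
MODEL_PATH = "lib/models/yuna/yuna-ai-v3-atomic-q_6_k.gguf"

def extract_text(completion):
    # llama-cpp-python returns an OpenAI-style completion dict:
    # {"choices": [{"text": "..."}], ...}
    return completion["choices"][0]["text"]

def run_direct_test(model_path=MODEL_PATH, prompt="Hello, how are you?"):
    # Import here so the rest of the file works without llama-cpp-python installed
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=2048)
    return extract_text(llm(prompt, max_tokens=64))
```

If `run_direct_test()` also returns an empty string, the problem is in the model/tokenizer layer rather than in Yuna's server code.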

@yukiarimo
Owner

yukiarimo commented Nov 1, 2024

  1. It's not a problem in my code; it's a llama-cpp-python error, something with the tokens.
  2. It's a V4 problem, so wait a little; our team is currently optimizing the model's parameters.
  3. Try KoboldCPP as a backend. An installation package and guide will be provided soon.
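
For anyone who wants to try KoboldCPP before the guide lands, here is a minimal sketch of talking to a running KoboldCPP instance through its KoboldAI-compatible HTTP API. The default port (5001) and the sampler values are assumptions about a local setup, not part of Yuna:

```python
import json
import urllib.request

def build_payload(prompt, max_length=200, temperature=0.7):
    # Minimal request body for KoboldCPP's /api/v1/generate endpoint
    return {"prompt": prompt, "max_length": max_length, "temperature": temperature}

def kobold_generate(prompt, url="http://localhost:5001/api/v1/generate"):
    # POST the payload and pull the generated text out of the response
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["results"][0]["text"]
```

Launch KoboldCPP with the same GGUF file, then call `kobold_generate("Hello")`; a non-empty result would confirm the model itself generates fine outside llama-cpp-python.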

@greentree-ai
Author

OK, good to know I wasn't doing anything wrong. I'll look forward to the KoboldCPP guide.

@yukiarimo
Owner

I will also try implementing the MLX backend soon; hopefully, they don't have this problem (spoiler: they do).
