
LLaMA-Adapter #529

Open
Bachstelze opened this issue Apr 1, 2023 · 1 comment
Labels
enhancement New feature or request

Comments

@Bachstelze

There is a new adapter called LLaMA-Adapter, a lightweight adaptation method for fine-tuning instruction-following LLaMA models 🔥, using the 52K instruction data provided by Stanford Alpaca.

Open source status

  • The model implementation is available in the GitHub repo.
  • The model weights are partially available: variants of LLaMA are available, e.g. gpt4all and GPTQ-for-LLaMa, but the weights of LLaMA-Adapter itself aren't available yet.
  • Authors of LLaMA-Adapter are @ZrrSkywalker @csuhan @lupantech
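
For context, the core mechanism of LLaMA-Adapter is a small set of learnable adaptation prompts whose attention output is scaled by a zero-initialized gate, so fine-tuning starts exactly from the frozen model's behaviour. Below is a minimal, hypothetical PyTorch sketch of that idea (not the authors' implementation; the module name, dimensions, and prompt length are made up for illustration):

```python
# Minimal sketch of zero-init gated prompt attention (hypothetical, simplified).
import torch
import torch.nn as nn

class ZeroInitPromptAttention(nn.Module):
    """Self-attention layer with K learnable prompt tokens whose contribution
    is scaled by a per-head gate initialized to zero."""

    def __init__(self, dim: int, n_heads: int, prompt_len: int = 10):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(n_heads))  # zero-init gating
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)
        self.wo = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz, seqlen, dim = x.shape
        q = self.wq(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)

        # Ordinary causal self-attention over the input tokens.
        mask = torch.triu(torch.full((seqlen, seqlen), float("-inf"), device=x.device), 1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5 + mask, dim=-1)
        out = attn @ v

        # Attention over the learnable prompt tokens, gated so it contributes
        # nothing at initialization (tanh(0) = 0) and grows during fine-tuning.
        p = self.prompt.unsqueeze(0).expand(bsz, -1, -1)
        pk = self.wk(p).view(bsz, -1, self.n_heads, self.head_dim).transpose(1, 2)
        pv = self.wv(p).view(bsz, -1, self.n_heads, self.head_dim).transpose(1, 2)
        p_attn = torch.softmax(q @ pk.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        out = out + torch.tanh(self.gate).view(1, -1, 1, 1) * (p_attn @ pv)

        return self.wo(out.transpose(1, 2).reshape(bsz, seqlen, dim))

# Usage example with made-up sizes.
layer = ZeroInitPromptAttention(dim=512, n_heads=8, prompt_len=10)
x = torch.randn(2, 16, 512)
print(layer(x).shape)  # torch.Size([2, 16, 512])
```

In the actual method only the prompts and gates (plus a few other small parameters) are trained while the LLaMA weights stay frozen, which is what makes it lightweight.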
@zhuconv

zhuconv commented Jul 30, 2024

Any updates related to this enhancement? I think LLaMA-Adapter is really influential, with more than 5k stars, and such an enhancement would be very useful. 😃
