
Using PEFT (Parameter-Efficient Fine-Tuning) and the larger Google Gemma 7B model to generate a training set to customize the Gemma 2B model #21

Open
obriensystems opened this issue Mar 17, 2024 · 0 comments

obriensystems commented Mar 17, 2024

Discovery

see https://github.com/ObrienlabsDev/blog/wiki/Lab:-DLIW62596-Efficient-Large-Language-Model-(LLM)-Customization-%E2%80%90-NVidia-Deep-Learning-Institute-at-GTC24-San-Jose
https://www.mercity.ai/blog-post/fine-tuning-llms-using-peft-and-lora
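
The title describes a two-stage workflow: first use the larger Gemma 7B model as a "teacher" to generate synthetic training text, then attach PEFT (LoRA) adapters to the smaller Gemma 2B "student" and fine-tune only those adapter weights on that text. The issue doesn't name a toolchain, so the sketch below assumes the Hugging Face `transformers` and `peft` libraries and the public `google/gemma-7b-it` / `google/gemma-2b` checkpoints; the prompts are placeholders, not from the lab.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Stage 1: use the larger Gemma 7B model as a "teacher" to
# generate synthetic training completions.
teacher_id = "google/gemma-7b-it"  # instruction-tuned 7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(teacher_id)
teacher = AutoModelForCausalLM.from_pretrained(
    teacher_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder prompts; a real run would use domain-specific ones.
prompts = [
    "Explain parameter-efficient fine-tuning in two sentences.",
    "Give one example use case for a small on-device LLM.",
]

training_texts = []
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(teacher.device)
    out = teacher.generate(
        **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
    )
    training_texts.append(tokenizer.decode(out[0], skip_special_tokens=True))

# Stage 2: attach LoRA adapters to the smaller Gemma 2B "student".
student_id = "google/gemma-2b"
student = AutoModelForCausalLM.from_pretrained(
    student_id, torch_dtype=torch.bfloat16, device_map="auto"
)
lora_config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
student = get_peft_model(student, lora_config)
student.print_trainable_parameters()  # only adapter weights are trainable
```

From here the generated completions would be tokenized into a dataset and trained with a standard `Trainer` (or `trl`'s `SFTTrainer`); because only the low-rank adapter matrices receive gradients, the fine-tune fits in a fraction of the memory that a full update of the 2B weights would need.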

@obriensystems obriensystems self-assigned this Mar 17, 2024
@obriensystems obriensystems changed the title Using PeFT and the Google Gemma 7B model to customize the Gemma 2B model Using PeFT (Parameter efficient Fine Tuning) and the larger Google Gemma 7B model to generate a training set to customize the Gemma 2B model Mar 17, 2024