Hey MistralAI hackathon participants!
If it's helpful, I uploaded a 4-bit pre-quantized bitsandbytes version of Mistral's new 32K base model to https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit - loading it uses about 1GB less VRAM due to reduced fragmentation. The non-quantized base model is also up at https://huggingface.co/unsloth/mistral-7b-v0.2, courtesy of Alpindale.
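A minimal sketch of how the pre-quantized checkpoint could be loaded with Hugging Face `transformers` (assumes a CUDA GPU and the `bitsandbytes` package are available; the repo id is the one linked above):

```python
# Sketch: loading the pre-quantized 4-bit checkpoint with transformers +
# bitsandbytes. The quantization settings are stored in the repo's config,
# so no explicit BitsAndBytesConfig should be needed. Untested here.
MODEL_ID = "unsloth/mistral-7b-v0.2-bnb-4bit"

def load_model():
    # Imports kept inside the function so the file can be inspected
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" places the quantized layers on the GPU.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    return tokenizer, model
```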
Also, if you're short on resources:
Colab has free Tesla T4s
Kaggle has 2x Tesla T4s with 30 hours per week for free.
If you want 2x faster QLoRA / LoRA finetuning that uses 70% less memory with 0% accuracy degradation, try out Unsloth! https://github.com/unslothai/unsloth.
Colab for Mistral 7b v1: https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing
Colab for Mistral 7b v2: https://colab.research.google.com/drive/1Fa8QVleamfNELceNM9n7SeAGr_hT5XIn?usp=sharing
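As a rough back-of-envelope check on the memory figures above (my own arithmetic, not from the Unsloth repo - the 70% figure also covers activations and optimizer state, not just weights):

```python
# Back-of-envelope VRAM estimate for weight storage only (excludes
# activations, optimizer state, and KV cache).
PARAMS = 7.24e9  # ~7B parameters in Mistral 7B

def weight_gb(bits_per_param: float) -> float:
    """Gigabytes needed to store the weights at a given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

fp16 = weight_gb(16)       # ~14.5 GB at 16-bit
nf4 = weight_gb(4)         # ~3.6 GB at 4-bit
savings = 1 - nf4 / fp16   # fraction saved on weights alone
print(f"fp16: {fp16:.1f} GB, 4-bit: {nf4:.1f} GB, saved: {savings:.0%}")
```

Weights alone shrink by 75%, which is consistent with a fine-tuning pipeline landing around the 70% overall savings quoted once the other buffers are counted.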