Hey all, I hope you are having a good day.
I would like to ask a question, please:
Q: Quantize a pre-trained model using QLoRA or LoRA (PEFT technique)
How can I use QLoRA or parameter-efficient fine-tuning (PEFT) with a model that is not registered on Hugging Face but is instead based on OFA?
Here is the repo of the model: GitHub
I am trying to quantize the Tiny version, but I don't know whether I need LoRA, and if so, how to apply it for parameter-efficient fine-tuning.
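To make the question concrete, here is a minimal sketch of the direction I have in mind, assuming the OFA-Tiny checkpoint can be loaded as a plain PyTorch `nn.Module` and that `peft.get_peft_model` can wrap such a custom model. The loading call and the layer names for OFA are placeholders, not the real OFA API; I only used a stand-in Transformer module so the snippet runs on its own.

```python
# Minimal sketch (not tested against the OFA repo): applying LoRA via peft
# to a plain PyTorch model that is not registered on the Hugging Face Hub.
import torch
from peft import LoraConfig, get_peft_model

# Assumption: the OFA-Tiny checkpoint can be loaded as an ordinary nn.Module
# through the repo's own utilities, e.g. (hypothetical call):
#   model = load_ofa_tiny("ofa_tiny.pt")
# Stand-in module used here only so the sketch is runnable end to end:
model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2,
)

# target_modules must name the nn.Linear layers inside the real OFA blocks;
# for the stand-in model these are "linear1"/"linear2". For OFA something like
# "q_proj"/"v_proj" might be the right targets, but that is only a guess.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["linear1", "linear2"],
    bias="none",
)

# Wrap the custom model so that only the LoRA adapter weights are trainable.
# For QLoRA the base weights would additionally have to be loaded in 4-bit
# (e.g. via bitsandbytes), which I assume has to be wired in manually for a
# model that is not on the Hub.
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
```

Is this roughly the right direction for an OFA-based model, or is a different approach needed for the quantization part?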