This project fine-tunes an instruction-tuned Mistral 7B model on instruction-code pairs, specifically natural-language instructions paired with kubectl commands.
We used a dataset generated by the kubeget subproject, which scrapes the official kubectl documentation and augments the gathered data using the OpenAI GPT APIs.
You can find the dataset on Hugging Face here.
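As a rough illustration of what such instruction-code pairs look like when prepared for training, here is a minimal sketch that renders one pair with Mistral's `[INST]`/`[/INST]` chat markup. The helper name and the example pair are hypothetical; the actual kubeget dataset fields and prompt template may differ.

```python
def format_pair(instruction: str, command: str) -> str:
    """Render one instruction/kubectl pair in Mistral-style [INST] chat markup.

    This is an illustrative template only; the project's real
    preprocessing may use a different format.
    """
    return f"<s>[INST] {instruction} [/INST] {command}</s>"


# Hypothetical example record, not taken from the actual dataset.
example = format_pair(
    "List all pods in the kube-system namespace.",
    "kubectl get pods -n kube-system",
)
print(example)
```

Rendering pairs into the same chat format the base model was instruction-tuned on keeps the fine-tuning data consistent with the prompts the model will see at inference time.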
The fine-tuning process of the base model is documented in the notebook below.
| Target | Notebook |
|---|---|
| Kubectl | |