diff --git a/integrations/model-training/torchtune/notebooks/Comet_and_torchtune.ipynb b/integrations/model-training/torchtune/notebooks/Comet_and_torchtune.ipynb
index 6b9d5a2..18b2d75 100644
--- a/integrations/model-training/torchtune/notebooks/Comet_and_torchtune.ipynb
+++ b/integrations/model-training/torchtune/notebooks/Comet_and_torchtune.ipynb
@@ -1,579 +1,1337 @@
{
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "QGh1ELq9353o"
+ },
+ "source": [
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ZwFdmc4rzjwM"
+ },
+ "source": [
+ "# Comet and torchtune\n",
+ "\n",
+ "[Comet](https://www.comet.com/site/?utm_medium=colab&utm_source=comet-examples&utm_campaign=torchtune) is an MLOps platform designed to help data scientists and teams build better models faster! Comet provides tooling to track, explain, manage, and monitor your models in a single place! It works with Jupyter notebooks and scripts and, most importantly, it's 100% free to get started!\n",
+ "\n",
+ "[torchtune](https://github.com/pytorch/torchtune) is a PyTorch-native library for easily authoring, fine-tuning, and experimenting with LLMs.\n",
+ "\n",
+ "Instrument your torchtune training runs with Comet to start managing experiments with efficiency, reproducibility, and collaboration in mind.\n",
+ "\n",
+ "Find more information about [our integration with torchtune here](https://www.comet.com/docs/v2/integrations/third-party-tools/torchtune?utm_medium=colab&utm_source=comet-examples&utm_campaign=torchtune) or [learn about our other integrations here](https://www.comet.com/docs/v2/integrations?utm_medium=colab&utm_source=comet-examples&utm_campaign=torchtune)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SE2dLI22Muuy"
+ },
+ "source": [
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ "🌟 Can't wait? For a preview of what's to come, check out the completed Comet project [here](https://www.comet.com/examples/comet-examples-torchtune-mistral7b?utm_medium=colab&utm_source=comet-examples&utm_campaign=torchtune)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
"colab": {
- "provenance": [],
- "machine_shape": "hm",
- "gpuType": "A100"
+ "base_uri": "https://localhost:8080/"
},
- "kernelspec": {
- "name": "python3",
- "display_name": "Python 3"
+ "id": "PM3nbzHr0BwP",
+ "outputId": "eed9156a-2ed4-4394-9c02-36682e18a81e"
+ },
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/pytorch/torchtune/"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2ZyMqeuL4FUi"
+ },
+ "source": [
+ "## Install dependencies"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
},
- "language_info": {
- "name": "python"
+ "id": "Vq4bUdro0F-Y",
+ "outputId": "4858ebfc-9a4c-44ed-eee8-5e8f7f5b1acd"
+ },
+ "outputs": [],
+ "source": [
+ "%pip install \"./torchtune[dev]\" comet_ml"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "LEcuScqX4Klr"
+ },
+ "source": [
+ "## Login to Comet\n",
+ "From the CLI, run `comet login`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
},
- "accelerator": "GPU"
+ "id": "EssuzOD10KcM",
+ "outputId": "d0c4f201-8caf-4dc1-c753-79972fd98671"
+ },
+ "outputs": [],
+ "source": [
+ "import comet_ml\n",
+ "\n",
+ "comet_ml.login()"
+ ]
},
- "cells": [
- {
- "cell_type": "markdown",
- "source": [
- "\n",
- " \n",
- ""
- ],
- "metadata": {
- "id": "QGh1ELq9353o"
- }
- },
- {
- "cell_type": "markdown",
- "source": [
- "# Comet and torchtune\n",
- "\n",
- "[Comet](https://www.comet.com/site/?utm_medium=colab&utm_source=comet-examples&utm_campaign=torchtune) is an MLOps platform designed to help data scientists and teams build better models faster! Comet provides tooling to track, explain, manage, and monitor your models in a single place! It works with Jupyter notebooks and scripts and-- most importantly--it's 100% free to get started!\n",
- "\n",
- "[torchtune](https://github.com/pytorch/torchtune) is a PyTorch-native library for easily authoring, fine-tuning and experimenting with LLMs.\n",
- "\n",
- "Instrument your torchtune training runs with Comet to start managing experiments with efficiency, reproducibility, and collaboration in mind.\n",
- "\n",
- "Find more information about [our integration with torchtune here](https://www.comet.com/docs/v2/integrations/third-party-tools/torchtune?utm_medium=colab&utm_source=comet-examples&utm_campaign=torchtune) or [learn about our other integrations here](https://www.comet.com/docs/v2/integrations?utm_medium=colab&utm_source=comet-examples&utm_campaign=torchtune)."
- ],
- "metadata": {
- "id": "ZwFdmc4rzjwM"
- }
- },
- {
- "cell_type": "markdown",
- "source": [
- "\n",
- "\n",
- " \n",
- "\n",
- "\n",
- "\n",
- "🌟 Can't wait? For a preview for what's to come, check out the completed Comet project [here](https://www.comet.com/examples/comet-examples-torchtune-mistral7b?utm_medium=colab&utm_source=comet-examples&utm_campaign=torchtune)."
- ],
- "metadata": {
- "id": "SE2dLI22Muuy"
- }
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "PM3nbzHr0BwP",
- "outputId": "74a6ac85-1828-4f37-aa04-806f3955db2e"
- },
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Cloning into 'torchtune'...\n",
- "remote: Enumerating objects: 18300, done.\u001b[K\n",
- "remote: Counting objects: 100% (3065/3065), done.\u001b[K\n",
- "remote: Compressing objects: 100% (1182/1182), done.\u001b[K\n",
- "remote: Total 18300 (delta 2318), reused 2386 (delta 1873), pack-reused 15235 (from 1)\u001b[K\n",
- "Receiving objects: 100% (18300/18300), 16.47 MiB | 17.59 MiB/s, done.\n",
- "Resolving deltas: 100% (14121/14121), done.\n"
- ]
- }
- ],
- "source": [
- "!git clone https://github.com/pytorch/torchtune/"
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Install dependencies"
- ],
- "metadata": {
- "id": "2ZyMqeuL4FUi"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "!pip install -qqq comet_ml"
- ],
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "Z2tHL_oa0Ejs",
- "outputId": "a7ecc7d3-2e80-4160-c4a8-63b7d81d590a"
- },
- "execution_count": 2,
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/687.3 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m687.3/687.3 kB\u001b[0m \u001b[31m36.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[?25h\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/979.1 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m979.1/979.1 kB\u001b[0m \u001b[31m53.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[?25h\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/3.3 MB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[91m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[91m╸\u001b[0m \u001b[32m3.3/3.3 MB\u001b[0m \u001b[31m210.9 MB/s\u001b[0m eta \u001b[36m0:00:01\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.3/3.3 MB\u001b[0m \u001b[31m91.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[?25h\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/54.5 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m54.5/54.5 kB\u001b[0m \u001b[31m4.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[?25h\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/309.1 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m309.1/309.1 kB\u001b[0m \u001b[31m25.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[?25h\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/137.9 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m137.9/137.9 kB\u001b[0m \u001b[31m12.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[?25h"
- ]
- }
- ]
- },
- {
- "cell_type": "code",
- "source": [
- "!cd torchtune/ && python -m pip install -qqq \".[dev]\""
- ],
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "Vq4bUdro0F-Y",
- "outputId": "b58ea797-ca80-4ada-be2d-ba998e5d8334"
- },
- "execution_count": 3,
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- " Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n",
- " Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n",
- " Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m117.0/117.0 kB\u001b[0m \u001b[31m10.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m409.3/409.3 kB\u001b[0m \u001b[31m19.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m323.6/323.6 kB\u001b[0m \u001b[31m16.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m137.5/137.5 MB\u001b[0m \u001b[31m16.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.4/75.4 kB\u001b[0m \u001b[31m6.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m527.3/527.3 kB\u001b[0m \u001b[31m35.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m79.5/79.5 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m204.6/204.6 kB\u001b[0m \u001b[31m17.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.1/1.1 MB\u001b[0m \u001b[31m58.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m9.4/9.4 MB\u001b[0m \u001b[31m117.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m234.7/234.7 kB\u001b[0m \u001b[31m20.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m116.3/116.3 kB\u001b[0m \u001b[31m11.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m207.3/207.3 kB\u001b[0m \u001b[31m18.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m99.0/99.0 kB\u001b[0m \u001b[31m8.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m39.9/39.9 MB\u001b[0m \u001b[31m54.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.1/2.1 MB\u001b[0m \u001b[31m76.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.7/5.7 MB\u001b[0m \u001b[31m112.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m12.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m194.1/194.1 kB\u001b[0m \u001b[31m17.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m468.9/468.9 kB\u001b[0m \u001b[31m29.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.7/62.7 kB\u001b[0m \u001b[31m6.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
- "\u001b[?25h Building wheel for torchtune (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
- " Building wheel for antlr4-python3-runtime (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
- "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
- "cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.\n",
- "ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.\u001b[0m\u001b[31m\n",
- "\u001b[0m"
- ]
- }
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Login to Comet\n",
- "For CLI, use `comet login `"
- ],
- "metadata": {
- "id": "LEcuScqX4Klr"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "import comet_ml\n",
- "\n",
- "comet_ml.login(api_key=COMET_API_KEY)"
- ],
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "EssuzOD10KcM",
- "outputId": "065aafb2-18eb-472e-c76c-23d63c481229"
- },
- "execution_count": 4,
- "outputs": [
- {
- "output_type": "stream",
- "name": "stderr",
- "text": [
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Valid Comet API Key saved in /root/.comet.config (set COMET_CONFIG to change where it is saved).\n"
- ]
- }
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Download the model"
- ],
- "metadata": {
- "id": "tZjS8Zt34jWL"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "!tune download mistralai/Mistral-7B-v0.1 \\\n",
- " --output-dir /tmp/Mistral-7B-v0.1/ \\\n",
- " --hf-token HF_TOKEN"
- ],
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "PaWPU4gA0ST4",
- "outputId": "3cd21f75-499b-4c07-aeb8-e86330bf54e4"
- },
- "execution_count": 9,
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Ignoring files matching the following patterns: *.safetensors\n",
- "Fetching 12 files: 100% 12/12 [00:00<00:00, 2613.68it/s]\n",
- "Successfully downloaded model repo and wrote to the following locations:\n",
- "/tmp/Mistral-7B-v0.1/pytorch_model.bin.index.json\n",
- "/tmp/Mistral-7B-v0.1/config.json\n",
- "/tmp/Mistral-7B-v0.1/tokenizer_config.json\n",
- "/tmp/Mistral-7B-v0.1/special_tokens_map.json\n",
- "/tmp/Mistral-7B-v0.1/.gitattributes\n",
- "/tmp/Mistral-7B-v0.1/adapter_model.bin\n",
- "/tmp/Mistral-7B-v0.1/README.md\n",
- "/tmp/Mistral-7B-v0.1/.huggingface\n",
- "/tmp/Mistral-7B-v0.1/generation_config.json\n",
- "/tmp/Mistral-7B-v0.1/hf_model_0001_0.pt\n",
- "/tmp/Mistral-7B-v0.1/adapter_config.json\n",
- "/tmp/Mistral-7B-v0.1/torchtune_config.yaml\n",
- "/tmp/Mistral-7B-v0.1/tokenizer.json\n",
- "/tmp/Mistral-7B-v0.1/hf_model_0002_0.pt\n",
- "/tmp/Mistral-7B-v0.1/pytorch_model-00001-of-00002.bin\n",
- "/tmp/Mistral-7B-v0.1/pytorch_model-00002-of-00002.bin\n",
- "/tmp/Mistral-7B-v0.1/adapter_0.pt\n",
- "/tmp/Mistral-7B-v0.1/tokenizer.model\n",
- "/tmp/Mistral-7B-v0.1/model.safetensors.index.json\n"
- ]
- }
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Create the config file"
- ],
- "metadata": {
- "id": "a52jWY0V4to-"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "%%writefile mistral_comet_lora.yaml\n",
- "tokenizer:\n",
- " _component_: torchtune.models.mistral.mistral_tokenizer\n",
- " path: /tmp/Mistral-7B-v0.1/tokenizer.model\n",
- "\n",
- "# Dataset\n",
- "dataset:\n",
- " _component_: torchtune.datasets.alpaca_cleaned_dataset\n",
- " train_on_input: True\n",
- "seed: null\n",
- "shuffle: True\n",
- "\n",
- "# Model Arguments\n",
- "model:\n",
- " _component_: torchtune.models.mistral.lora_mistral_7b\n",
- " lora_attn_modules: ['q_proj', 'k_proj', 'v_proj']\n",
- " apply_lora_to_mlp: True\n",
- " apply_lora_to_output: True\n",
- " lora_rank: 64\n",
- " lora_alpha: 16\n",
- "\n",
- "checkpointer:\n",
- " _component_: torchtune.utils.FullModelHFCheckpointer\n",
- " checkpoint_dir: /tmp/Mistral-7B-v0.1\n",
- " checkpoint_files: [\n",
- " pytorch_model-00001-of-00002.bin,\n",
- " pytorch_model-00002-of-00002.bin\n",
- " ]\n",
- " recipe_checkpoint: null\n",
- " output_dir: /tmp/Mistral-7B-v0.1\n",
- " model_type: MISTRAL\n",
- "resume_from_checkpoint: False\n",
- "\n",
- "optimizer:\n",
- " _component_: torch.optim.AdamW\n",
- " lr: 2e-5\n",
- "\n",
- "lr_scheduler:\n",
- " _component_: torchtune.modules.get_cosine_schedule_with_warmup\n",
- " num_warmup_steps: 100\n",
- "\n",
- "loss:\n",
- " _component_: torch.nn.CrossEntropyLoss\n",
- "\n",
- "# Fine-tuning arguments\n",
- "batch_size: 4\n",
- "epochs: 1\n",
- "max_steps_per_epoch: 100\n",
- "gradient_accumulation_steps: 2\n",
- "compile: False\n",
- "\n",
- "# Training env\n",
- "device: cuda\n",
- "\n",
- "# Memory management\n",
- "enable_activation_checkpointing: True\n",
- "\n",
- "# Reduced precision\n",
- "dtype: bf16\n",
- "############################### Enable Comet ###################################\n",
- "################################################################################\n",
- "# Logging\n",
- "# enable logging to the built-in CometLogger\n",
- "metric_logger:\n",
- " _component_: torchtune.utils.metric_logging.CometLogger\n",
- " # the Comet project to log to\n",
- " project: comet-examples-torchtune-mistral7b\n",
- " workspace: examples\n",
- " experiment_name: mistral7b-alpaca-cleaned\n",
- "################################################################################\n",
- "################################################################################\n",
- "output_dir: /tmp/Mistral-7B-v0.1\n",
- "log_peak_memory_stats: True\n",
- "\n",
- "# Profiler (disabled)\n",
- "profiler:\n",
- " _component_: torchtune.utils.setup_torch_profiler\n",
- " enabled: False"
- ],
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "LrioSBcP0WG7",
- "outputId": "301d4e96-febf-436f-c97e-ed29a864c521"
- },
- "execution_count": 12,
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "Overwriting mistral_comet_lora.yaml\n"
- ]
- }
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Train the model"
- ],
- "metadata": {
- "id": "rJxXDbrs4xFy"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "!tune run lora_finetune_single_device --config mistral_comet_lora.yaml"
- ],
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "fvZLiofe0Z2X",
- "outputId": "2c3acf42-e1f9-4f25-aeea-9cdd4ef6837c"
- },
- "execution_count": 13,
- "outputs": [
- {
- "output_type": "stream",
- "name": "stdout",
- "text": [
- "INFO:torchtune.utils.logging:Running LoRAFinetuneRecipeSingleDevice with resolved config:\n",
- "\n",
- "batch_size: 4\n",
- "checkpointer:\n",
- " _component_: torchtune.utils.FullModelHFCheckpointer\n",
- " checkpoint_dir: /tmp/Mistral-7B-v0.1\n",
- " checkpoint_files:\n",
- " - pytorch_model-00001-of-00002.bin\n",
- " - pytorch_model-00002-of-00002.bin\n",
- " model_type: MISTRAL\n",
- " output_dir: /tmp/Mistral-7B-v0.1\n",
- " recipe_checkpoint: null\n",
- "compile: false\n",
- "dataset:\n",
- " _component_: torchtune.datasets.alpaca_cleaned_dataset\n",
- " train_on_input: true\n",
- "device: cuda\n",
- "dtype: bf16\n",
- "enable_activation_checkpointing: true\n",
- "epochs: 1\n",
- "gradient_accumulation_steps: 2\n",
- "log_peak_memory_stats: true\n",
- "loss:\n",
- " _component_: torch.nn.CrossEntropyLoss\n",
- "lr_scheduler:\n",
- " _component_: torchtune.modules.get_cosine_schedule_with_warmup\n",
- " num_warmup_steps: 100\n",
- "max_steps_per_epoch: 100\n",
- "metric_logger:\n",
- " _component_: torchtune.utils.metric_logging.CometLogger\n",
- " experiment_name: mistral7b-alpaca-cleaned\n",
- " project: comet-examples-torchtune-mistral7b\n",
- " workspace: examples\n",
- "model:\n",
- " _component_: torchtune.models.mistral.lora_mistral_7b\n",
- " apply_lora_to_mlp: true\n",
- " apply_lora_to_output: true\n",
- " lora_alpha: 16\n",
- " lora_attn_modules:\n",
- " - q_proj\n",
- " - k_proj\n",
- " - v_proj\n",
- " lora_rank: 64\n",
- "optimizer:\n",
- " _component_: torch.optim.AdamW\n",
- " lr: 2.0e-05\n",
- "output_dir: /tmp/Mistral-7B-v0.1\n",
- "profiler:\n",
- " _component_: torchtune.utils.setup_torch_profiler\n",
- " enabled: false\n",
- "resume_from_checkpoint: false\n",
- "seed: null\n",
- "shuffle: true\n",
- "tokenizer:\n",
- " _component_: torchtune.models.mistral.mistral_tokenizer\n",
- " path: /tmp/Mistral-7B-v0.1/tokenizer.model\n",
- "\n",
- "DEBUG:torchtune.utils.logging:Setting manual seed to local seed 4022048605. Local seed is seed + rank = 4022048605 + 0\n",
- "\u001b[1;38;5;214mCOMET WARNING:\u001b[0m To get all data logged automatically, import comet_ml before the following modules: torch.\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Experiment is live on comet.com \u001b[38;5;39mhttps://www.comet.com/examples/comet-examples-torchtune-mistral7b/afe3c112379042168635e69efa25c7e5\u001b[0m\n",
- "\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Couldn't find a Git repository in '/content' nor in any parent directory. Set `COMET_GIT_DIRECTORY` if your Git Repository is elsewhere.\n",
- "INFO:torchtune.utils.logging:Model is initialized with precision torch.bfloat16.\n",
- "INFO:torchtune.utils.logging:Memory stats after model init:\n",
- "\tGPU peak memory allocation: 14.72 GiB\n",
- "\tGPU peak memory reserved: 14.74 GiB\n",
- "\tGPU peak memory active: 14.72 GiB\n",
- "INFO:torchtune.utils.logging:Tokenizer is initialized from file.\n",
- "INFO:torchtune.utils.logging:Optimizer and loss are initialized.\n",
- "INFO:torchtune.utils.logging:Loss is initialized.\n",
- "Downloading readme: 100%|██████████| 11.6k/11.6k [00:00<00:00, 51.1kB/s]\n",
- "Downloading data: 100%|██████████| 44.3M/44.3M [00:00<00:00, 108MB/s] \n",
- "Generating train split: 100%|██████████| 51760/51760 [00:00<00:00, 116724.68 examples/s]\n",
- "INFO:torchtune.utils.logging:Dataset and Sampler are initialized.\n",
- "INFO:torchtune.utils.logging:Learning rate scheduler is initialized.\n",
- "WARNING:torchtune.utils.logging: Profiling disabled.\n",
- "INFO:torchtune.utils.logging: Profiler config after instantiation: {'enabled': False}\n",
- " 0%| | 0/100 [00:00, ?it/s]/usr/local/lib/python3.10/dist-packages/torch/utils/checkpoint.py:1399: FutureWarning: `torch.cpu.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cpu', args...)` instead.\n",
- " with device_autocast_ctx, torch.cpu.amp.autocast(**cpu_autocast_kwargs), recompute_context: # type: ignore[attr-defined]\n",
- "1|100|Loss: 0.7635065913200378: 100%|██████████| 100/100 [02:01<00:00, 1.13s/it]INFO:torchtune.utils.logging:Model checkpoint of size 9.94 GB saved to /tmp/Mistral-7B-v0.1/hf_model_0001_0.pt\n",
- "INFO:torchtune.utils.logging:Model checkpoint of size 4.54 GB saved to /tmp/Mistral-7B-v0.1/hf_model_0002_0.pt\n",
- "INFO:torchtune.utils.logging:Adapter checkpoint of size 0.31 GB saved to /tmp/Mistral-7B-v0.1/adapter_0.pt\n",
- "INFO:torchtune.utils.logging:Adapter checkpoint of size 0.31 GB saved to /tmp/Mistral-7B-v0.1/adapter_model.bin\n",
- "INFO:torchtune.utils.logging:Adapter checkpoint of size 0.00 GB saved to /tmp/Mistral-7B-v0.1/adapter_config.json\n",
- "INFO:torchtune.utils.logging:Saving final epoch checkpoint.\n",
- "INFO:torchtune.utils.logging:The full model checkpoint, including all weights and configurations, has been saved successfully.You can now use this checkpoint for further training or inference.\n",
- "1|100|Loss: 0.7635065913200378: 100%|██████████| 100/100 [03:08<00:00, 1.89s/it]\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m ---------------------------------------------------------------------------------------\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Comet.ml Experiment Summary\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m ---------------------------------------------------------------------------------------\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Data:\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m display_summary_level : 1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m name : mistral7b-alpaca-cleaned\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m url : \u001b[38;5;39mhttps://www.comet.com/examples/comet-examples-torchtune-mistral7b/afe3c112379042168635e69efa25c7e5\u001b[0m\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Metrics [count] (min, max):\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m loss [100] : (0.5963497161865234, 1.8327248096466064)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m lr [100] : (2.0000000000000002e-07, 2e-05)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m peak_memory_active [100] : (15.467722415924072, 16.85972785949707)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m peak_memory_alloc [100] : (15.467722415924072, 16.85972785949707)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m peak_memory_reserved [100] : (16.056640625, 22.583984375)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m tokens_per_second_per_gpu [100] : (1252.3178295162577, 2893.796604217567)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Others:\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Name : mistral7b-alpaca-cleaned\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m hasNestedParams : True\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Parameters:\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m batch_size : 4\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m checkpointer|_component_ : torchtune.utils.FullModelHFCheckpointer\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m checkpointer|checkpoint_dir : /tmp/Mistral-7B-v0.1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m checkpointer|checkpoint_files : ['pytorch_model-00001-of-00002.bin', 'pytorch_model-00002-of-00002.bin']\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m checkpointer|model_type : MISTRAL\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m checkpointer|output_dir : /tmp/Mistral-7B-v0.1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m checkpointer|recipe_checkpoint : None\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m compile : False\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m dataset|_component_ : torchtune.datasets.alpaca_cleaned_dataset\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m dataset|train_on_input : True\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m device : cuda\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m dtype : bf16\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m enable_activation_checkpointing : True\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m epochs : 1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m gradient_accumulation_steps : 2\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m log_peak_memory_stats : True\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m loss|_component_ : torch.nn.CrossEntropyLoss\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m lr_scheduler|_component_ : torchtune.modules.get_cosine_schedule_with_warmup\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m lr_scheduler|num_warmup_steps : 100\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m max_steps_per_epoch : 100\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m metric_logger|_component_ : torchtune.utils.metric_logging.CometLogger\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m metric_logger|experiment_name : mistral7b-alpaca-cleaned\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m metric_logger|project : comet-examples-torchtune-mistral7b\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m metric_logger|workspace : examples\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m model|_component_ : torchtune.models.mistral.lora_mistral_7b\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m model|apply_lora_to_mlp : True\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m model|apply_lora_to_output : True\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m model|lora_alpha : 16\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m model|lora_attn_modules : ['q_proj', 'k_proj', 'v_proj']\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m model|lora_rank : 64\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m optimizer|_component_ : torch.optim.AdamW\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m optimizer|lr : 2e-05\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m output_dir : /tmp/Mistral-7B-v0.1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m profiler|_component_ : torchtune.utils.setup_torch_profiler\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m profiler|enabled : False\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m resume_from_checkpoint : False\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m seed : None\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m shuffle : True\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m tokenizer|_component_ : torchtune.models.mistral.mistral_tokenizer\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m tokenizer|path : /tmp/Mistral-7B-v0.1/tokenizer.model\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Uploads:\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m asset : 1 (1.41 KB)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m environment details : 1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m filename : 1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m installed packages : 1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m os packages : 1\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m source_code : 2 (46.14 KB)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m \n",
- "\u001b[1;38;5;214mCOMET WARNING:\u001b[0m To get all data logged automatically, import comet_ml before the following modules: torch.\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Please wait for metadata to finish uploading (timeout is 3600 seconds)\n",
- "\u001b[1;38;5;39mCOMET INFO:\u001b[0m Uploading 96 metrics, params and output messages\n"
- ]
- }
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "🌟 Check out the [final public project here](https://www.comet.com/examples/comet-examples-torchtune-mistral7b)!"
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tZjS8Zt34jWL"
+ },
+ "source": [
+ "## Download the model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IouDAk1FyiU7"
+ },
+ "source": [
+ "You need to log in to Hugging Face to download this model:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 145,
+ "referenced_widgets": [
+ "f1daee1cb79345d5ae3d17c5a05c53fc",
+ "25f101b15513458199fbafc80fbbffb7",
+ "460a4aa60c1b476f8c52840236ef6c75",
+ "e8c889cbe34c4302834294d5bc32ef9f",
+ "fa50fe5baa1849deb810279aaa6bc328",
+ "4e1da00bc2644367b9a686c2ae526596",
+ "2fe1d18543ec48a982ae760dfb2f72b6",
+ "25a489ec736649f7ac70e7809150ba40",
+ "5ca47283ef9a4f7ea6259c9cb53b62ea",
+ "c0e75c4c9973464c8e86007f1d39c9f7",
+ "00867cbdb14b451ebb9d6c4ca2e10d66",
+ "8d106b78dab5441f9cf1e6bd3f7bdde0",
+ "57fcb83a2d754d77bbfa7365a3d9ba48",
+ "9228744c40e848f0a068004cb793053e",
+ "a7ab20d37d2e4a739c4bc35a92057e9e",
+ "0439c3fe41074d63902bc4b9f29a64c2",
+ "d240c85832074298a94c1a1479809216",
+ "084762ff283f4090ba399cc48997adea",
+ "c431c24c29bf49668298ecded9f89dde",
+ "546dfba48f8c4dc09b6f236fdd98f9e6",
+ "cf87b2a7a3cf4c9f9c3ccd5fd9b1f669",
+ "044534d6450b47c98c0fb270ef46000e",
+ "49d3ecb4668244ea98da8313f6347ed9",
+ "c6f120038d844558826bc4db9640dd77",
+ "c69ea844b67e4b38b35ff533d67e68ec",
+ "270340b8769744c8a027e79539838cd7",
+ "73ba358ba797484ea29299ae50611bc2",
+ "06bf71e3b45c451ca5841672368a1172",
+ "db85bc56477c4350a6d39c79610373e6",
+ "aa9be702a494436c92d6aac8cf051df6",
+ "90c0c3d0956540dd9ff04c9db7cba781",
+ "2cd5cd873ee74f5c92c7ad5eeee80ad2"
+ ]
+ },
+ "id": "d3RHkE7syqSP",
+ "outputId": "639ebf8d-6b9a-49b1-af29-b609b1c68fdc"
+ },
+ "outputs": [],
+ "source": [
+ "from huggingface_hub import notebook_login\n",
+ "\n",
+ "notebook_login()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "PaWPU4gA0ST4",
+ "outputId": "d162e2e7-19ee-4382-ad9c-dba31981fb9f"
+ },
+ "outputs": [],
+ "source": [
+ "!tune download mistralai/Mistral-7B-v0.1 \\\n",
+ " --output-dir /tmp/Mistral-7B-v0.1/"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "a52jWY0V4to-"
+ },
+ "source": [
+ "## Create the config file"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Qy4WWjrOzSBY"
+ },
+ "source": [
+ "You are now ready to fine-tune the Mistral 7B model and log your results to Comet. To enable logging to Comet, the important part of the config is:\n",
+ "\n",
+ "```yaml\n",
+ "metric_logger:\n",
+ " _component_: torchtune.training.metric_logging.CometLogger\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "LrioSBcP0WG7",
+ "outputId": "4fc9f256-1258-4e4b-c7a8-92f9380f1153"
+ },
+ "outputs": [],
+ "source": [
+ "%%writefile mistral_comet_lora.yaml\n",
+ "tokenizer:\n",
+ " _component_: torchtune.models.mistral.mistral_tokenizer\n",
+ " path: /tmp/Mistral-7B-v0.1/tokenizer.model\n",
+ "\n",
+ "# Dataset\n",
+ "dataset:\n",
+ " _component_: torchtune.datasets.alpaca_cleaned_dataset\n",
+ " train_on_input: True\n",
+ "seed: null\n",
+ "shuffle: True\n",
+ "\n",
+ "# Model Arguments\n",
+ "model:\n",
+ " _component_: torchtune.models.mistral.lora_mistral_7b\n",
+ " lora_attn_modules: ['q_proj', 'k_proj', 'v_proj']\n",
+ " apply_lora_to_mlp: True\n",
+ " apply_lora_to_output: True\n",
+ " lora_rank: 64\n",
+ " lora_alpha: 16\n",
+ "\n",
+ "checkpointer:\n",
+ " _component_: torchtune.training.FullModelHFCheckpointer\n",
+ " checkpoint_dir: /tmp/Mistral-7B-v0.1\n",
+ " checkpoint_files: [\n",
+ " pytorch_model-00001-of-00002.bin,\n",
+ " pytorch_model-00002-of-00002.bin\n",
+ " ]\n",
+ " recipe_checkpoint: null\n",
+ " output_dir: /tmp/Mistral-7B-v0.1\n",
+ " model_type: MISTRAL\n",
+ "resume_from_checkpoint: False\n",
+ "\n",
+ "optimizer:\n",
+ " _component_: torch.optim.AdamW\n",
+ " lr: 2e-5\n",
+ "\n",
+ "lr_scheduler:\n",
+ " _component_: torchtune.modules.get_cosine_schedule_with_warmup\n",
+ " num_warmup_steps: 100\n",
+ "\n",
+ "loss:\n",
+ " _component_: torch.nn.CrossEntropyLoss\n",
+ "\n",
+ "# Fine-tuning arguments\n",
+ "batch_size: 4\n",
+ "epochs: 1\n",
+ "max_steps_per_epoch: 100\n",
+ "gradient_accumulation_steps: 2\n",
+ "compile: False\n",
+ "\n",
+ "# Training env\n",
+ "device: cuda\n",
+ "\n",
+ "# Memory management\n",
+ "enable_activation_checkpointing: True\n",
+ "\n",
+ "# Reduced precision\n",
+ "dtype: bf16\n",
+ "############################### Enable Comet ###################################\n",
+ "################################################################################\n",
+ "# Logging\n",
+ "# enable logging to the built-in CometLogger\n",
+ "metric_logger:\n",
+ " _component_: torchtune.training.metric_logging.CometLogger\n",
+ " # the Comet project to log to\n",
+ " project: comet-examples-torchtune-mistral7b\n",
+ " experiment_name: mistral7b-alpaca-cleaned\n",
+ "################################################################################\n",
+ "################################################################################\n",
+ "output_dir: /tmp/Mistral-7B-v0.1\n",
+ "log_peak_memory_stats: True\n",
+ "\n",
+ "# Profiler (disabled)\n",
+ "profiler:\n",
+ " _component_: torchtune.training.setup_torch_profiler\n",
+ " enabled: False"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "rJxXDbrs4xFy"
+ },
+ "source": [
+ "## Train the model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "fvZLiofe0Z2X",
+ "outputId": "b7a3a68a-f316-4c2e-c8fd-71671d493a7f"
+ },
+ "outputs": [],
+ "source": [
+ "!tune run lora_finetune_single_device --config mistral_comet_lora.yaml"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NNsfeCTD5KvD"
+ },
+ "source": [
+ "🌟 Check out the [final public project here](https://www.comet.com/examples/comet-examples-torchtune-mistral7b)!"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "gpuType": "A100",
+ "machine_shape": "hm",
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ },
+ "widgets": {
+ "application/vnd.jupyter.widget-state+json": {
+ "00867cbdb14b451ebb9d6c4ca2e10d66": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "0439c3fe41074d63902bc4b9f29a64c2": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "044534d6450b47c98c0fb270ef46000e": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "LabelModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "LabelModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "LabelView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_73ba358ba797484ea29299ae50611bc2",
+ "placeholder": "",
+ "style": "IPY_MODEL_06bf71e3b45c451ca5841672368a1172",
+ "value": "Your token has been saved in your configured git credential helpers (store)."
+ }
+ },
+ "06bf71e3b45c451ca5841672368a1172": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "084762ff283f4090ba399cc48997adea": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "LabelModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "LabelModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "LabelView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_c431c24c29bf49668298ecded9f89dde",
+ "placeholder": "",
+ "style": "IPY_MODEL_546dfba48f8c4dc09b6f236fdd98f9e6",
+ "value": "Connecting..."
+ }
+ },
+ "25a489ec736649f7ac70e7809150ba40": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "25f101b15513458199fbafc80fbbffb7": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "HTMLModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "HTMLModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "HTMLView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_25a489ec736649f7ac70e7809150ba40",
+ "placeholder": "",
+ "style": "IPY_MODEL_5ca47283ef9a4f7ea6259c9cb53b62ea",
+ "value": "Copy a token from your Hugging Face\ntokens page and paste it below. Immediately click login after copying\nyour token or it might be stored in plain text in this notebook file. "
+ }
+ },
+ "270340b8769744c8a027e79539838cd7": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "2cd5cd873ee74f5c92c7ad5eeee80ad2": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "2fe1d18543ec48a982ae760dfb2f72b6": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": "center",
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": "flex",
+ "flex": null,
+ "flex_flow": "column",
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": "50%"
+ }
+ },
+ "460a4aa60c1b476f8c52840236ef6c75": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "PasswordModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "PasswordModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "PasswordView",
+ "continuous_update": true,
+ "description": "Token:",
+ "description_tooltip": null,
+ "disabled": false,
+ "layout": "IPY_MODEL_c0e75c4c9973464c8e86007f1d39c9f7",
+ "placeholder": "",
+ "style": "IPY_MODEL_00867cbdb14b451ebb9d6c4ca2e10d66",
+ "value": ""
+ }
+ },
+ "49d3ecb4668244ea98da8313f6347ed9": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "LabelModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "LabelModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "LabelView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_db85bc56477c4350a6d39c79610373e6",
+ "placeholder": "",
+ "style": "IPY_MODEL_aa9be702a494436c92d6aac8cf051df6",
+ "value": "Your token has been saved to /root/.cache/huggingface/token"
+ }
+ },
+ "4e1da00bc2644367b9a686c2ae526596": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "HTMLModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "HTMLModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "HTMLView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_0439c3fe41074d63902bc4b9f29a64c2",
+ "placeholder": "",
+ "style": "IPY_MODEL_d240c85832074298a94c1a1479809216",
+ "value": "\nPro Tip: If you don't already have one, you can create a dedicated\n'notebooks' token with 'write' access, that you can then easily reuse for all\nnotebooks. "
+ }
+ },
+ "546dfba48f8c4dc09b6f236fdd98f9e6": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "57fcb83a2d754d77bbfa7365a3d9ba48": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "5ca47283ef9a4f7ea6259c9cb53b62ea": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "73ba358ba797484ea29299ae50611bc2": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "8d106b78dab5441f9cf1e6bd3f7bdde0": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "90c0c3d0956540dd9ff04c9db7cba781": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "9228744c40e848f0a068004cb793053e": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "a7ab20d37d2e4a739c4bc35a92057e9e": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "ButtonStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "ButtonStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "button_color": null,
+ "font_weight": ""
+ }
+ },
+ "aa9be702a494436c92d6aac8cf051df6": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "c0e75c4c9973464c8e86007f1d39c9f7": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "c431c24c29bf49668298ecded9f89dde": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "c69ea844b67e4b38b35ff533d67e68ec": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "c6f120038d844558826bc4db9640dd77": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "LabelModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "LabelModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "LabelView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_90c0c3d0956540dd9ff04c9db7cba781",
+ "placeholder": "",
+ "style": "IPY_MODEL_2cd5cd873ee74f5c92c7ad5eeee80ad2",
+ "value": "Login successful"
+ }
+ },
+ "cf87b2a7a3cf4c9f9c3ccd5fd9b1f669": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "LabelModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "LabelModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "LabelView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_c69ea844b67e4b38b35ff533d67e68ec",
+ "placeholder": "",
+ "style": "IPY_MODEL_270340b8769744c8a027e79539838cd7",
+ "value": "Token is valid (permission: read)."
+ }
+ },
+ "d240c85832074298a94c1a1479809216": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "db85bc56477c4350a6d39c79610373e6": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "e8c889cbe34c4302834294d5bc32ef9f": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "CheckboxModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "CheckboxModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "CheckboxView",
+ "description": "Add token as git credential?",
+ "description_tooltip": null,
+ "disabled": false,
+ "indent": true,
+ "layout": "IPY_MODEL_8d106b78dab5441f9cf1e6bd3f7bdde0",
+ "style": "IPY_MODEL_57fcb83a2d754d77bbfa7365a3d9ba48",
+ "value": true
+ }
+ },
+ "f1daee1cb79345d5ae3d17c5a05c53fc": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "VBoxModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "VBoxModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "VBoxView",
+ "box_style": "",
+ "children": [
+ "IPY_MODEL_cf87b2a7a3cf4c9f9c3ccd5fd9b1f669",
+ "IPY_MODEL_044534d6450b47c98c0fb270ef46000e",
+ "IPY_MODEL_49d3ecb4668244ea98da8313f6347ed9",
+ "IPY_MODEL_c6f120038d844558826bc4db9640dd77"
],
- "metadata": {
- "id": "NNsfeCTD5KvD"
- }
+ "layout": "IPY_MODEL_2fe1d18543ec48a982ae760dfb2f72b6"
+ }
+ },
+ "fa50fe5baa1849deb810279aaa6bc328": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "ButtonModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "ButtonModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "ButtonView",
+ "button_style": "",
+ "description": "Login",
+ "disabled": false,
+ "icon": "",
+ "layout": "IPY_MODEL_9228744c40e848f0a068004cb793053e",
+ "style": "IPY_MODEL_a7ab20d37d2e4a739c4bc35a92057e9e",
+ "tooltip": ""
+ }
}
- ]
-}
\ No newline at end of file
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}