From 4470c5c7990f281154b8d6dd4d3e476178f9807d Mon Sep 17 00:00:00 2001 From: Seph Mard Date: Tue, 10 Sep 2024 11:45:09 -0400 Subject: [PATCH 1/2] Created using Colab --- colabs/intro/Intro_to_Weights_&_Biases.ipynb | 895 ++++++++++--------- 1 file changed, 472 insertions(+), 423 deletions(-) diff --git a/colabs/intro/Intro_to_Weights_&_Biases.ipynb b/colabs/intro/Intro_to_Weights_&_Biases.ipynb index e72f9265..59bfb155 100644 --- a/colabs/intro/Intro_to_Weights_&_Biases.ipynb +++ b/colabs/intro/Intro_to_Weights_&_Biases.ipynb @@ -1,425 +1,474 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\"Open\n", - "" - ] + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "view-in-github", + "colab_type": "text" + }, + "source": [ + "\"Open" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Rq4f0AGDmKSq" + }, + "source": [ + "\"Open\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3L2G1PagmKSs" + }, + "source": [ + "\"Weights\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "NDZPJIGgmKSs" + }, + "source": [ + "Use [W&B](https://wandb.ai/site?utm_source=intro_colab&utm_medium=code&utm_campaign=intro) for machine learning experiment tracking, model checkpointing, collaboration with your team and more. See the full W&B Documentation [here](https://docs.wandb.ai/).\n", + "\n", + "In this notebook, you will create and track a machine learning experiment using a simple PyTorch model. By the end of the notebook, you will have an interactive project dashboard that you can share and customize with other members of your team. [View an example dashboard here](https://wandb.ai/wandb/wandb_example)." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "7ZvRVIRRmKSs" + }, + "source": [ + "## Prerequisites\n", + "\n", + "Install the W&B Python SDK and log in:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "T38ds1nTmKSs" + }, + "outputs": [], + "source": [ + "!pip install wandb -qU" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "OMiguvcEmKSt" + }, + "outputs": [], + "source": [ + "# Log in to your W&B account\n", + "import wandb\n", + "import random\n", + "import math" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "OaHvAUxemKSt" + }, + "outputs": [], + "source": [ + "wandb.login()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "R2wiGOZsmKSt" + }, + "source": [ + "## Simulate and track a machine learning experiment with W&B\n", + "\n", + "Create, track, and visualize a machine learning experiment. To do this:\n", + "\n", + "1. Initialize a [W&B run](https://docs.wandb.ai/guides/runs) and pass in the hyperparameters you want to track.\n", + "2. Within your training loop, log metrics such as the accuracy and loss." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "t7E2GuxWmKSt" + }, + "outputs": [], + "source": [ + "import random\n", + "import math\n", + "\n", + "# Launch 5 simulated experiments\n", + "total_runs = 5\n", + "for run in range(total_runs):\n", + " # 1️. 
Start a new run to track this script\n", + " wandb.init(\n", + " # Set the project where this run will be logged\n", + " project=\"basic-intro\",\n", + " # We pass a run name (otherwise it’ll be randomly assigned, like sunshine-lollypop-10)\n", + " name=f\"experiment_{run}\",\n", + " # Track hyperparameters and run metadata\n", + " config={\n", + " \"learning_rate\": 0.02,\n", + " \"architecture\": \"CNN\",\n", + " \"dataset\": \"CIFAR-100\",\n", + " \"epochs\": 10,\n", + " })\n", + "\n", + " # This simple block simulates a training loop logging metrics\n", + " epochs = 10\n", + " offset = random.random() / 5\n", + " for epoch in range(2, epochs):\n", + " acc = 1 - 2 ** -epoch - random.random() / epoch - offset\n", + " loss = 2 ** -epoch + random.random() / epoch + offset\n", + "\n", + " # 2️. Log metrics from your script to W&B\n", + " wandb.log({\"acc\": acc, \"loss\": loss})\n", + "\n", + " # Mark the run as finished\n", + " wandb.finish()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "HkDXvp6dmKSu" + }, + "source": [ + "View how your machine learning experiments performed in your W&B project. Copy and paste the URL printed by the previous cell. The URL redirects you to a W&B project that contains a dashboard with graphs showing how your logged metrics, such as accuracy and loss, changed during training.\n", + "\n", + "The following image shows what a dashboard can look like:" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Pk4FdBDgmKSu" + }, + "source": [ + "![](https://i.imgur.com/Pell4Oo.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "u_lHzckrmKSu" + }, + "source": [ + "Now that we know how to integrate W&B into a pseudo machine learning training loop, let's track a machine learning experiment using a basic PyTorch neural network. 
The following code will also upload model checkpoints to W&B that you can then share with other teams in your organization.\n", + "\n", + "## Track a machine learning experiment using PyTorch\n", + "\n", + "The following code cell defines and trains a simple MNIST classifier. During training, W&B prints URLs. Click on the project page link to see your results stream in live to a W&B project.\n", + "\n", + "W&B runs automatically log [metrics](https://docs.wandb.ai/ref/app/pages/run-page#charts-tab),\n", + "[system information](https://docs.wandb.ai/ref/app/pages/run-page#system-tab),\n", + "[hyperparameters](https://docs.wandb.ai/ref/app/pages/run-page#overview-tab),\n", + "[terminal output](https://docs.wandb.ai/ref/app/pages/run-page#logs-tab) and\n", + "you'll see an [interactive table](https://docs.wandb.ai/guides/data-vis)\n", + "with model inputs and outputs.\n", + "\n", + "### Set up PyTorch Dataloader\n", + "The following cell defines some useful functions that we will need to train our machine learning model. The functions themselves are not unique to W&B, so we won't cover them in detail here. See the PyTorch documentation for more information on how to define a [forward and backward training loop](https://pytorch.org/tutorials/beginner/nn_tutorial.html), how to use [PyTorch DataLoaders](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html) to load data in for training, and how to define PyTorch models using the [`torch.nn.Sequential` class](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "7vov2WgBmKSu" + }, + "outputs": [], + "source": [ + "#@title\n", + "import torch, torchvision\n", + "import torch.nn as nn\n", + "from torchvision.datasets import MNIST\n", + "import torchvision.transforms as T\n", + "\n", + "MNIST.mirrors = [mirror for mirror in MNIST.mirrors if \"http://yann.lecun.com/\" not in mirror]\n", + "\n", + "device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n", + "\n", + "def get_dataloader(is_train, batch_size, slice=5):\n", + " \"Get a training dataloader\"\n", + " full_dataset = MNIST(root=\".\", train=is_train, transform=T.ToTensor(), download=True)\n", + " sub_dataset = torch.utils.data.Subset(full_dataset, indices=range(0, len(full_dataset), slice))\n", + " loader = torch.utils.data.DataLoader(dataset=sub_dataset,\n", + " batch_size=batch_size,\n", + " shuffle=True if is_train else False,\n", + " pin_memory=True, num_workers=2)\n", + " return loader\n", + "\n", + "def get_model(dropout):\n", + " \"A simple model\"\n", + " model = nn.Sequential(nn.Flatten(),\n", + " nn.Linear(28*28, 256),\n", + " nn.BatchNorm1d(256),\n", + " nn.ReLU(),\n", + " nn.Dropout(dropout),\n", + " nn.Linear(256,10)).to(device)\n", + " return model\n", + "\n", + "def validate_model(model, valid_dl, loss_func, log_images=False, batch_idx=0):\n", + " \"Compute performance of the model on the validation dataset and log a wandb.Table\"\n", + " model.eval()\n", + " val_loss = 0.\n", + " with torch.inference_mode():\n", + " correct = 0\n", + " for i, (images, labels) in enumerate(valid_dl):\n", + " images, labels = images.to(device), labels.to(device)\n", + "\n", + " # Forward pass ➡\n", + " outputs = model(images)\n", + " val_loss += loss_func(outputs, labels)*labels.size(0)\n", + "\n", + " # Compute accuracy and accumulate\n", + " _, predicted = torch.max(outputs.data, 1)\n", + " correct += (predicted == labels).sum().item()\n", + "\n", + " # Log one batch of 
images to the dashboard, always same batch_idx.\n", + " if i==batch_idx and log_images:\n", + " log_image_table(images, predicted, labels, outputs.softmax(dim=1))\n", + " return val_loss / len(valid_dl.dataset), correct / len(valid_dl.dataset)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "5pXEzC8PmKSu" + }, + "source": [ + "### Create a table to compare predicted values with true values\n", + "\n", + "The following cell is unique to W&B, so let's go over it.\n", + "\n", + "In the cell we define a function called `log_image_table`. Though technically optional, this function creates a W&B Table object. We will use the table object to create a table that shows what the model predicted for each image.\n", + "\n", + "More specifically, each row will consist of the image fed to the model, along with the predicted value and the actual value (label)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "z5QMEzkVmKSu" + }, + "outputs": [], + "source": [ + "def log_image_table(images, predicted, labels, probs):\n", + " \"Log a wandb.Table with (img, pred, target, scores)\"\n", + " # Create a wandb Table to log images, labels and predictions to\n", + " table = wandb.Table(columns=[\"image\", \"pred\", \"target\"]+[f\"score_{i}\" for i in range(10)])\n", + " for img, pred, targ, prob in zip(images.to(\"cpu\"), predicted.to(\"cpu\"), labels.to(\"cpu\"), probs.to(\"cpu\")):\n", + " table.add_data(wandb.Image(img[0].numpy()*255), pred, targ, *prob.numpy())\n", + " wandb.log({\"predictions_table\":table}, commit=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "PSERkx1imKSu" + }, + "source": [ + "### Train your model and upload checkpoints\n", + "\n", + "The following code trains and saves model checkpoints to your project. 
Use model checkpoints like you normally would to assess how the model performed during training.\n", + "\n", + "W&B also makes it easy to share your saved models and model checkpoints with other members of your team or organization. To learn how to share your model and model checkpoints with members outside of your team, see [W&B Registry](https://docs.wandb.ai/guides/registry)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "wZbZvZCRmKSu" + }, + "outputs": [], + "source": [ + "# Launch 3 experiments, trying different dropout rates\n", + "for _ in range(3):\n", + " # initialise a wandb run\n", + " wandb.init(\n", + " project=\"pytorch-intro\",\n", + " config={\n", + " \"epochs\": 5,\n", + " \"batch_size\": 128,\n", + " \"lr\": 1e-3,\n", + " \"dropout\": random.uniform(0.01, 0.80),\n", + " })\n", + "\n", + " # Copy your config\n", + " config = wandb.config\n", + "\n", + " # Get the data\n", + " train_dl = get_dataloader(is_train=True, batch_size=config.batch_size)\n", + " valid_dl = get_dataloader(is_train=False, batch_size=2*config.batch_size)\n", + " n_steps_per_epoch = math.ceil(len(train_dl.dataset) / config.batch_size)\n", + "\n", + " # A simple MLP model\n", + " model = get_model(config.dropout)\n", + "\n", + " # Make the loss and optimizer\n", + " loss_func = nn.CrossEntropyLoss()\n", + " optimizer = torch.optim.Adam(model.parameters(), lr=config.lr)\n", + "\n", + " # Training\n", + " example_ct = 0\n", + " step_ct = 0\n", + " for epoch in range(config.epochs):\n", + " model.train()\n", + " for step, (images, labels) in enumerate(train_dl):\n", + " images, labels = images.to(device), labels.to(device)\n", + "\n", + " outputs = model(images)\n", + " train_loss = loss_func(outputs, labels)\n", + " optimizer.zero_grad()\n", + " train_loss.backward()\n", + " optimizer.step()\n", + "\n", + " example_ct += len(images)\n", + " metrics = {\"train/train_loss\": train_loss,\n", + " \"train/epoch\": (step + 1 + (n_steps_per_epoch 
* epoch)) / n_steps_per_epoch,\n", + " \"train/example_ct\": example_ct}\n", + "\n", + " if step + 1 < n_steps_per_epoch:\n", + " # Log train metrics to wandb\n", + " wandb.log(metrics)\n", + "\n", + " step_ct += 1\n", + "\n", + " val_loss, accuracy = validate_model(model, valid_dl, loss_func, log_images=(epoch==(config.epochs-1)))\n", + "\n", + " # Log train and validation metrics to wandb\n", + " val_metrics = {\"val/val_loss\": val_loss,\n", + " \"val/val_accuracy\": accuracy}\n", + " wandb.log({**metrics, **val_metrics})\n", + "\n", + " # Save the model checkpoint to wandb\n", + " torch.save(model, \"my_model.pt\")\n", + " wandb.log_model(\"./my_model.pt\", \"my_mnist_model\", aliases=[f\"epoch-{epoch+1}_dropout-{round(wandb.config.dropout, 4)}\"])\n", + "\n", + " print(f\"Epoch: {epoch+1}, Train Loss: {train_loss:.3f}, Valid Loss: {val_loss:.3f}, Accuracy: {accuracy:.2f}\")\n", + "\n", + " # If you had a test set, this is how you could log it as a Summary metric\n", + " wandb.summary['test_accuracy'] = 0.8\n", + "\n", + " # Close your wandb run\n", + " wandb.finish()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "YW04-qY1mKSu" + }, + "source": [ + "You have now trained your first model using W&B. Click on one of the links above to see your metrics and your saved model checkpoints in the Artifacts tab in the W&B App UI." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "4BmMm1Q0mKSu" + }, + "source": [ + "## (Optional) Set up a W&B Alert\n", + "\n", + "Create a [W&B Alert](https://docs.wandb.ai/guides/track/alert) to send alerts to your Slack or email from your Python code.\n", + "\n", + "There are 2 steps to follow the first time you'd like to send a Slack or email alert, triggered from your code:\n", + "\n", + "1) Turn on Alerts in your W&B [User Settings](https://wandb.ai/settings)\n", + "2) Add `wandb.alert()` to your code. 
For example:\n", + "\n", + "```python\n", + "wandb.alert(\n", + " title=\"Low accuracy\",\n", + " text=\"Accuracy is below the acceptable threshold\"\n", + ")\n", + "```\n", + "\n", + "The following cell shows a minimal example of how to use `wandb.alert`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "4TwdDjTKmKSv" + }, + "outputs": [], + "source": [ + "# Start a wandb run\n", + "wandb.init(project=\"pytorch-intro\")\n", + "\n", + "# Simulating a model training loop\n", + "acc_threshold = 0.3\n", + "for training_step in range(1000):\n", + "\n", + " # Generate a random number for accuracy\n", + " accuracy = round(random.random() + random.random(), 3)\n", + " print(f'Accuracy is: {accuracy}, {acc_threshold}')\n", + "\n", + " # Log accuracy to wandb\n", + " wandb.log({\"Accuracy\": accuracy})\n", + "\n", + " # If the accuracy is below the threshold, fire a W&B Alert and stop the run\n", + " if accuracy <= acc_threshold:\n", + " # Send the wandb Alert\n", + " wandb.alert(\n", + " title='Low Accuracy',\n", + " text=f'Accuracy {accuracy} at step {training_step} is below the acceptable threshold, {acc_threshold}',\n", + " )\n", + " print('Alert triggered')\n", + " break\n", + "\n", + "# Mark the run as finished (useful in Jupyter notebooks)\n", + "wandb.finish()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "s3qmwUodmKSv" + }, + "source": [ + "You can find the full docs for [W&B Alerts here](https://docs.wandb.ai/guides/track/alert).\n", + "\n", + "## Next steps\n", + "In the next tutorial, you will learn how to do hyperparameter optimization using W&B Sweeps:\n", + "[Hyperparameters sweeps using PyTorch](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/pytorch/Organizing_Hyperparameter_Sweeps_in_PyTorch_with_W%26B.ipynb)" + ] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "provenance": [], + "toc_visible": true, + "include_colab_link": true + }, + "kernelspec": { 
+ "display_name": "Python 3", + "name": "python3" + } }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\"Weights\n", - "" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Use [W&B](https://wandb.ai/site?utm_source=intro_colab&utm_medium=code&utm_campaign=intro) for machine learning experiment tracking, model checkpointing, collaboration with your team and more. See the full W&B Documentation [here](https://docs.wandb.ai/).\n", - "\n", - "In this notebook, you will create and track a machine learning experiment using a simple PyTorch model. By the end of the notebook, you will have an interactive project dashboard that you can share and customize with other members of your team. [View an example dashboard here](https://wandb.ai/wandb/wandb_example)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Prerequisites\n", - "\n", - "Install the W&B Python SDK and log in:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "!pip install wandb -qU" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Log in to your W&B account\n", - "import wandb\n", - "import random\n", - "import math\n", - "\n", - "# Use wandb-core, temporary for wandb's new backend \n", - "wandb.require(\"core\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "wandb.login()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Simulate and track a machine learning experiment with W&B\n", - "\n", - "Create, track, and visualize a machine learning experiment. To do this:\n", - "\n", - "1. Initialize a [W&B run](https://docs.wandb.ai/guides/runs) and pass in the hyperparameters you want to track.\n", - "2. Within your training loop, log metrics such as the accuruacy and loss." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import random\n", - "import math\n", - "\n", - "# Launch 5 simulated experiments\n", - "total_runs = 5\n", - "for run in range(total_runs):\n", - " # 1️. Start a new run to track this script\n", - " wandb.init(\n", - " # Set the project where this run will be logged\n", - " project=\"basic-intro\",\n", - " # We pass a run name (otherwise it’ll be randomly assigned, like sunshine-lollypop-10)\n", - " name=f\"experiment_{run}\",\n", - " # Track hyperparameters and run metadata\n", - " config={\n", - " \"learning_rate\": 0.02,\n", - " \"architecture\": \"CNN\",\n", - " \"dataset\": \"CIFAR-100\",\n", - " \"epochs\": 10,\n", - " })\n", - "\n", - " # This simple block simulates a training loop logging metrics\n", - " epochs = 10\n", - " offset = random.random() / 5\n", - " for epoch in range(2, epochs):\n", - " acc = 1 - 2 ** -epoch - random.random() / epoch - offset\n", - " loss = 2 ** -epoch + random.random() / epoch + offset\n", - "\n", - " # 2️. Log metrics from your script to W&B\n", - " wandb.log({\"acc\": acc, \"loss\": loss})\n", - "\n", - " # Mark the run as finished\n", - " wandb.finish()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "View how your machine learning peformed in your W&B project. Copy and paste the URL link that is printed from the previous cell. The URL will redirect you to a W&B project that contains a dashboard showing graphs the show how \n", - "\n", - "The following image shows what a dashboard can look like:" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![](https://i.imgur.com/Pell4Oo.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now that we know how to integrate W&B into a psuedo machine learning training loop, let's track a machine learning experiment using a basic PyTorch neural network. 
The following code will also upload model checkpoints to W&B that you can then share with other teams in in your organization.\n", - "\n", - "## Track a machine learning experiment using Pytorch\n", - "\n", - "The following code cell defines and trains a simple MNIST classifier. During training, you will see W&B prints out URLs. Click on the project page link to see your results stream in live to a W&B project.\n", - "\n", - "W&B runs automatically log [metrics](https://docs.wandb.ai/ref/app/pages/run-page#charts-tab),\n", - "[system information](https://docs.wandb.ai/ref/app/pages/run-page#system-tab),\n", - "[hyperparameters](https://docs.wandb.ai/ref/app/pages/run-page#overview-tab),\n", - "[terminal output](https://docs.wandb.ai/ref/app/pages/run-page#logs-tab) and\n", - "you'll see an [interactive table](https://docs.wandb.ai/guides/data-vis)\n", - "with model inputs and outputs. \n", - "\n", - "### Set up PyTorch Dataloader\n", - "The following cell defines some useful functions that we will need to train our machine learning model. The functions themselves are not unique to W&B so we'll not cover them in detail here. See the PyTorch documentation for more information on how to define [forward and backward training loop](https://pytorch.org/tutorials/beginner/nn_tutorial.html), how to use [PyTorch DataLoaders](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html) to load data in for training, and how define PyTorch models using the [`torch.nn.Sequential` Class](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html). 
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#@title\n", - "import torch, torchvision\n", - "import torch.nn as nn\n", - "from torchvision.datasets import MNIST\n", - "import torchvision.transforms as T\n", - "\n", - "MNIST.mirrors = [mirror for mirror in MNIST.mirrors if \"http://yann.lecun.com/\" not in mirror]\n", - "\n", - "device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n", - "\n", - "def get_dataloader(is_train, batch_size, slice=5):\n", - " \"Get a training dataloader\"\n", - " full_dataset = MNIST(root=\".\", train=is_train, transform=T.ToTensor(), download=True)\n", - " sub_dataset = torch.utils.data.Subset(full_dataset, indices=range(0, len(full_dataset), slice))\n", - " loader = torch.utils.data.DataLoader(dataset=sub_dataset,\n", - " batch_size=batch_size,\n", - " shuffle=True if is_train else False,\n", - " pin_memory=True, num_workers=2)\n", - " return loader\n", - "\n", - "def get_model(dropout):\n", - " \"A simple model\"\n", - " model = nn.Sequential(nn.Flatten(),\n", - " nn.Linear(28*28, 256),\n", - " nn.BatchNorm1d(256),\n", - " nn.ReLU(),\n", - " nn.Dropout(dropout),\n", - " nn.Linear(256,10)).to(device)\n", - " return model\n", - "\n", - "def validate_model(model, valid_dl, loss_func, log_images=False, batch_idx=0):\n", - " \"Compute performance of the model on the validation dataset and log a wandb.Table\"\n", - " model.eval()\n", - " val_loss = 0.\n", - " with torch.inference_mode():\n", - " correct = 0\n", - " for i, (images, labels) in enumerate(valid_dl):\n", - " images, labels = images.to(device), labels.to(device)\n", - "\n", - " # Forward pass ➡\n", - " outputs = model(images)\n", - " val_loss += loss_func(outputs, labels)*labels.size(0)\n", - "\n", - " # Compute accuracy and accumulate\n", - " _, predicted = torch.max(outputs.data, 1)\n", - " correct += (predicted == labels).sum().item()\n", - "\n", - " # Log one batch of images to the dashboard, 
always same batch_idx.\n", - " if i==batch_idx and log_images:\n", - " log_image_table(images, predicted, labels, outputs.softmax(dim=1))\n", - " return val_loss / len(valid_dl.dataset), correct / len(valid_dl.dataset)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Create a teble to compare the predicted values versus the true value\n", - "\n", - "The following cell is unique to W&B, so let's go over it.\n", - "\n", - "In the cell we define a function called `log_image_table`. Though technically, optional, this function creates a W&B Table object. We will use the table object to create a table that shows what the model predicted for each image. \n", - "\n", - "More specifically, each row will conists of the image fed to the model, along with predicted value and the actual value (label). " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def log_image_table(images, predicted, labels, probs):\n", - " \"Log a wandb.Table with (img, pred, target, scores)\"\n", - " # Create a wandb Table to log images, labels and predictions to\n", - " table = wandb.Table(columns=[\"image\", \"pred\", \"target\"]+[f\"score_{i}\" for i in range(10)])\n", - " for img, pred, targ, prob in zip(images.to(\"cpu\"), predicted.to(\"cpu\"), labels.to(\"cpu\"), probs.to(\"cpu\")):\n", - " table.add_data(wandb.Image(img[0].numpy()*255), pred, targ, *prob.numpy())\n", - " wandb.log({\"predictions_table\":table}, commit=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Train your model and upload checkpoints\n", - "\n", - "The following code trains and saves model checkpoints to your project. Use model checkpoints like you normally would to assess how the model performed during training. \n", - "\n", - "W&B also makes it easy to share your saved models and model checkpoints with other members of your team or organization. 
To learn how to share your model and model checkpoints with members outside of your team, see [W&B Registry](https://docs.wandb.ai/guides/registry)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Launch 3 experiments, trying different dropout rates\n", - "for _ in range(3):\n", - " # initialise a wandb run\n", - " wandb.init(\n", - " project=\"pytorch-intro\",\n", - " config={\n", - " \"epochs\": 5,\n", - " \"batch_size\": 128,\n", - " \"lr\": 1e-3,\n", - " \"dropout\": random.uniform(0.01, 0.80),\n", - " })\n", - "\n", - " # Copy your config\n", - " config = wandb.config\n", - "\n", - " # Get the data\n", - " train_dl = get_dataloader(is_train=True, batch_size=config.batch_size)\n", - " valid_dl = get_dataloader(is_train=False, batch_size=2*config.batch_size)\n", - " n_steps_per_epoch = math.ceil(len(train_dl.dataset) / config.batch_size)\n", - "\n", - " # A simple MLP model\n", - " model = get_model(config.dropout)\n", - "\n", - " # Make the loss and optimizer\n", - " loss_func = nn.CrossEntropyLoss()\n", - " optimizer = torch.optim.Adam(model.parameters(), lr=config.lr)\n", - "\n", - " # Training\n", - " example_ct = 0\n", - " step_ct = 0\n", - " for epoch in range(config.epochs):\n", - " model.train()\n", - " for step, (images, labels) in enumerate(train_dl):\n", - " images, labels = images.to(device), labels.to(device)\n", - "\n", - " outputs = model(images)\n", - " train_loss = loss_func(outputs, labels)\n", - " optimizer.zero_grad()\n", - " train_loss.backward()\n", - " optimizer.step()\n", - "\n", - " example_ct += len(images)\n", - " metrics = {\"train/train_loss\": train_loss,\n", - " \"train/epoch\": (step + 1 + (n_steps_per_epoch * epoch)) / n_steps_per_epoch,\n", - " \"train/example_ct\": example_ct}\n", - "\n", - " if step + 1 < n_steps_per_epoch:\n", - " # Log train metrics to wandb\n", - " wandb.log(metrics)\n", - "\n", - " step_ct += 1\n", - "\n", - " val_loss, accuracy = 
validate_model(model, valid_dl, loss_func, log_images=(epoch==(config.epochs-1)))\n", - "\n", - " # Log train and validation metrics to wandb\n", - " val_metrics = {\"val/val_loss\": val_loss,\n", - " \"val/val_accuracy\": accuracy}\n", - " wandb.log({**metrics, **val_metrics})\n", - "\n", - " # Save the model checkpoint to wandb\n", - " torch.save(model, \"my_model.pt\")\n", - " wandb.log_model(\"./my_model.pt\", \"my_mnist_model\", aliases=[f\"epoch-{epoch+1}_dropout-{round(wandb.config.dropout, 4)}\"])\n", - "\n", - " print(f\"Epoch: {epoch+1}, Train Loss: {train_loss:.3f}, Valid Loss: {val_loss:3f}, Accuracy: {accuracy:.2f}\")\n", - "\n", - " # If you had a test set, this is how you could log it as a Summary metric\n", - " wandb.summary['test_accuracy'] = 0.8\n", - "\n", - " # Close your wandb run\n", - " wandb.finish()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You have now trained your first model using W&B. Click on one of the links above to see your metrics and see your saved model checkpoints in the Artifacts tab in the W&B App UI" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## (Optional) Set up a W&B Alert\n", - "\n", - "Create a [W&B Alerts](https://docs.wandb.ai/guides/track/alert) to send alerts to your Slack or email from your Python code. \n", - "\n", - "There are 2 steps to follow the first time you'd like to send a Slack or email alert, triggered from your code:\n", - "\n", - "1) Turn on Alerts in your W&B [User Settings](https://wandb.ai/settings)\n", - "2) Add `wandb.alert()` to your code. 
For example:\n", - "\n", - "```python\n", - "wandb.alert(\n", - " title=\"Low accuracy\",\n", - " text=f\"Accuracy is below the acceptable threshold\"\n", - ")\n", - "```\n", - "\n", - "The following cell shows a minimal example below to see how to use `wandb.alert`" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Start a wandb run\n", - "wandb.init(project=\"pytorch-intro\")\n", - "\n", - "# Simulating a model training loop\n", - "acc_threshold = 0.3\n", - "for training_step in range(1000):\n", - "\n", - " # Generate a random number for accuracy\n", - " accuracy = round(random.random() + random.random(), 3)\n", - " print(f'Accuracy is: {accuracy}, {acc_threshold}')\n", - "\n", - " # Log accuracy to wandb\n", - " wandb.log({\"Accuracy\": accuracy})\n", - "\n", - " # If the accuracy is below the threshold, fire a W&B Alert and stop the run\n", - " if accuracy <= acc_threshold:\n", - " # Send the wandb Alert\n", - " wandb.alert(\n", - " title='Low Accuracy',\n", - " text=f'Accuracy {accuracy} at step {training_step} is below the acceptable theshold, {acc_threshold}',\n", - " )\n", - " print('Alert triggered')\n", - " break\n", - "\n", - "# Mark the run as finished (useful in Jupyter notebooks)\n", - "wandb.finish()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can find the full docs for [W&B Alerts here](https://docs.wandb.ai/guides/track/alert).\n", - "\n", - "## Next steps\n", - "The next tutorial you will learn how to do hyperparameter optimization using W&B Sweeps:\n", - "[Hyperparameters sweeps using PyTorch](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/pytorch/Organizing_Hyperparameter_Sweeps_in_PyTorch_with_W%26B.ipynb)" - ] - } - ], - "metadata": { - "accelerator": "GPU", - "colab": { - "include_colab_link": true, - "provenance": [], - "toc_visible": true - }, - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - 
} - }, - "nbformat": 4, - "nbformat_minor": 4 -} + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file From ea5e3934652c56e34b86e4dceb925036259d9c22 Mon Sep 17 00:00:00 2001 From: GitHub Action Date: Tue, 10 Sep 2024 18:55:18 +0000 Subject: [PATCH 2/2] Auto-clean notebooks --- colabs/intro/Intro_to_Weights_&_Biases.ipynb | 900 +++++++++---------- 1 file changed, 428 insertions(+), 472 deletions(-) diff --git a/colabs/intro/Intro_to_Weights_&_Biases.ipynb b/colabs/intro/Intro_to_Weights_&_Biases.ipynb index 59bfb155..0536c27c 100644 --- a/colabs/intro/Intro_to_Weights_&_Biases.ipynb +++ b/colabs/intro/Intro_to_Weights_&_Biases.ipynb @@ -1,474 +1,430 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "view-in-github", - "colab_type": "text" - }, - "source": [ - "\"Open" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Rq4f0AGDmKSq" - }, - "source": [ - "\"Open\n", - "" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3L2G1PagmKSs" - }, - "source": [ - "\"Weights\n", - "" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "NDZPJIGgmKSs" - }, - "source": [ - "Use [W&B](https://wandb.ai/site?utm_source=intro_colab&utm_medium=code&utm_campaign=intro) for machine learning experiment tracking, model checkpointing, collaboration with your team and more. See the full W&B Documentation [here](https://docs.wandb.ai/).\n", - "\n", - "In this notebook, you will create and track a machine learning experiment using a simple PyTorch model. By the end of the notebook, you will have an interactive project dashboard that you can share and customize with other members of your team. [View an example dashboard here](https://wandb.ai/wandb/wandb_example)." 
- ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "7ZvRVIRRmKSs" - }, - "source": [ - "## Prerequisites\n", - "\n", - "Install the W&B Python SDK and log in:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "T38ds1nTmKSs" - }, - "outputs": [], - "source": [ - "!pip install wandb -qU" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "OMiguvcEmKSt" - }, - "outputs": [], - "source": [ - "# Log in to your W&B account\n", - "import wandb\n", - "import random\n", - "import math" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "OaHvAUxemKSt" - }, - "outputs": [], - "source": [ - "wandb.login()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "R2wiGOZsmKSt" - }, - "source": [ - "## Simulate and track a machine learning experiment with W&B\n", - "\n", - "Create, track, and visualize a machine learning experiment. To do this:\n", - "\n", - "1. Initialize a [W&B run](https://docs.wandb.ai/guides/runs) and pass in the hyperparameters you want to track.\n", - "2. Within your training loop, log metrics such as the accuruacy and loss." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "t7E2GuxWmKSt" - }, - "outputs": [], - "source": [ - "import random\n", - "import math\n", - "\n", - "# Launch 5 simulated experiments\n", - "total_runs = 5\n", - "for run in range(total_runs):\n", - " # 1️. 
Start a new run to track this script\n", - " wandb.init(\n", - " # Set the project where this run will be logged\n", - " project=\"basic-intro\",\n", - " # We pass a run name (otherwise it’ll be randomly assigned, like sunshine-lollypop-10)\n", - " name=f\"experiment_{run}\",\n", - " # Track hyperparameters and run metadata\n", - " config={\n", - " \"learning_rate\": 0.02,\n", - " \"architecture\": \"CNN\",\n", - " \"dataset\": \"CIFAR-100\",\n", - " \"epochs\": 10,\n", - " })\n", - "\n", - " # This simple block simulates a training loop logging metrics\n", - " epochs = 10\n", - " offset = random.random() / 5\n", - " for epoch in range(2, epochs):\n", - " acc = 1 - 2 ** -epoch - random.random() / epoch - offset\n", - " loss = 2 ** -epoch + random.random() / epoch + offset\n", - "\n", - " # 2️. Log metrics from your script to W&B\n", - " wandb.log({\"acc\": acc, \"loss\": loss})\n", - "\n", - " # Mark the run as finished\n", - " wandb.finish()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "HkDXvp6dmKSu" - }, - "source": [ - "View how your machine learning peformed in your W&B project. Copy and paste the URL link that is printed from the previous cell. The URL will redirect you to a W&B project that contains a dashboard showing graphs the show how\n", - "\n", - "The following image shows what a dashboard can look like:" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Pk4FdBDgmKSu" - }, - "source": [ - "![](https://i.imgur.com/Pell4Oo.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "u_lHzckrmKSu" - }, - "source": [ - "Now that we know how to integrate W&B into a psuedo machine learning training loop, let's track a machine learning experiment using a basic PyTorch neural network. 
The following code will also upload model checkpoints to W&B that you can then share with other teams in in your organization.\n", - "\n", - "## Track a machine learning experiment using Pytorch\n", - "\n", - "The following code cell defines and trains a simple MNIST classifier. During training, you will see W&B prints out URLs. Click on the project page link to see your results stream in live to a W&B project.\n", - "\n", - "W&B runs automatically log [metrics](https://docs.wandb.ai/ref/app/pages/run-page#charts-tab),\n", - "[system information](https://docs.wandb.ai/ref/app/pages/run-page#system-tab),\n", - "[hyperparameters](https://docs.wandb.ai/ref/app/pages/run-page#overview-tab),\n", - "[terminal output](https://docs.wandb.ai/ref/app/pages/run-page#logs-tab) and\n", - "you'll see an [interactive table](https://docs.wandb.ai/guides/data-vis)\n", - "with model inputs and outputs.\n", - "\n", - "### Set up PyTorch Dataloader\n", - "The following cell defines some useful functions that we will need to train our machine learning model. The functions themselves are not unique to W&B so we'll not cover them in detail here. See the PyTorch documentation for more information on how to define [forward and backward training loop](https://pytorch.org/tutorials/beginner/nn_tutorial.html), how to use [PyTorch DataLoaders](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html) to load data in for training, and how define PyTorch models using the [`torch.nn.Sequential` Class](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html)." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "7vov2WgBmKSu" - }, - "outputs": [], - "source": [ - "#@title\n", - "import torch, torchvision\n", - "import torch.nn as nn\n", - "from torchvision.datasets import MNIST\n", - "import torchvision.transforms as T\n", - "\n", - "MNIST.mirrors = [mirror for mirror in MNIST.mirrors if \"http://yann.lecun.com/\" not in mirror]\n", - "\n", - "device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n", - "\n", - "def get_dataloader(is_train, batch_size, slice=5):\n", - " \"Get a training dataloader\"\n", - " full_dataset = MNIST(root=\".\", train=is_train, transform=T.ToTensor(), download=True)\n", - " sub_dataset = torch.utils.data.Subset(full_dataset, indices=range(0, len(full_dataset), slice))\n", - " loader = torch.utils.data.DataLoader(dataset=sub_dataset,\n", - " batch_size=batch_size,\n", - " shuffle=True if is_train else False,\n", - " pin_memory=True, num_workers=2)\n", - " return loader\n", - "\n", - "def get_model(dropout):\n", - " \"A simple model\"\n", - " model = nn.Sequential(nn.Flatten(),\n", - " nn.Linear(28*28, 256),\n", - " nn.BatchNorm1d(256),\n", - " nn.ReLU(),\n", - " nn.Dropout(dropout),\n", - " nn.Linear(256,10)).to(device)\n", - " return model\n", - "\n", - "def validate_model(model, valid_dl, loss_func, log_images=False, batch_idx=0):\n", - " \"Compute performance of the model on the validation dataset and log a wandb.Table\"\n", - " model.eval()\n", - " val_loss = 0.\n", - " with torch.inference_mode():\n", - " correct = 0\n", - " for i, (images, labels) in enumerate(valid_dl):\n", - " images, labels = images.to(device), labels.to(device)\n", - "\n", - " # Forward pass ➡\n", - " outputs = model(images)\n", - " val_loss += loss_func(outputs, labels)*labels.size(0)\n", - "\n", - " # Compute accuracy and accumulate\n", - " _, predicted = torch.max(outputs.data, 1)\n", - " correct += (predicted == labels).sum().item()\n", - "\n", - " # Log one batch of 
images to the dashboard, always same batch_idx.\n", - " if i==batch_idx and log_images:\n", - " log_image_table(images, predicted, labels, outputs.softmax(dim=1))\n", - " return val_loss / len(valid_dl.dataset), correct / len(valid_dl.dataset)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "5pXEzC8PmKSu" - }, - "source": [ - "### Create a teble to compare the predicted values versus the true value\n", - "\n", - "The following cell is unique to W&B, so let's go over it.\n", - "\n", - "In the cell we define a function called `log_image_table`. Though technically, optional, this function creates a W&B Table object. We will use the table object to create a table that shows what the model predicted for each image.\n", - "\n", - "More specifically, each row will conists of the image fed to the model, along with predicted value and the actual value (label)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "z5QMEzkVmKSu" - }, - "outputs": [], - "source": [ - "def log_image_table(images, predicted, labels, probs):\n", - " \"Log a wandb.Table with (img, pred, target, scores)\"\n", - " # Create a wandb Table to log images, labels and predictions to\n", - " table = wandb.Table(columns=[\"image\", \"pred\", \"target\"]+[f\"score_{i}\" for i in range(10)])\n", - " for img, pred, targ, prob in zip(images.to(\"cpu\"), predicted.to(\"cpu\"), labels.to(\"cpu\"), probs.to(\"cpu\")):\n", - " table.add_data(wandb.Image(img[0].numpy()*255), pred, targ, *prob.numpy())\n", - " wandb.log({\"predictions_table\":table}, commit=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "PSERkx1imKSu" - }, - "source": [ - "### Train your model and upload checkpoints\n", - "\n", - "The following code trains and saves model checkpoints to your project. 
Use model checkpoints like you normally would to assess how the model performed during training.\n", - "\n", - "W&B also makes it easy to share your saved models and model checkpoints with other members of your team or organization. To learn how to share your model and model checkpoints with members outside of your team, see [W&B Registry](https://docs.wandb.ai/guides/registry)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "wZbZvZCRmKSu" - }, - "outputs": [], - "source": [ - "# Launch 3 experiments, trying different dropout rates\n", - "for _ in range(3):\n", - " # initialise a wandb run\n", - " wandb.init(\n", - " project=\"pytorch-intro\",\n", - " config={\n", - " \"epochs\": 5,\n", - " \"batch_size\": 128,\n", - " \"lr\": 1e-3,\n", - " \"dropout\": random.uniform(0.01, 0.80),\n", - " })\n", - "\n", - " # Copy your config\n", - " config = wandb.config\n", - "\n", - " # Get the data\n", - " train_dl = get_dataloader(is_train=True, batch_size=config.batch_size)\n", - " valid_dl = get_dataloader(is_train=False, batch_size=2*config.batch_size)\n", - " n_steps_per_epoch = math.ceil(len(train_dl.dataset) / config.batch_size)\n", - "\n", - " # A simple MLP model\n", - " model = get_model(config.dropout)\n", - "\n", - " # Make the loss and optimizer\n", - " loss_func = nn.CrossEntropyLoss()\n", - " optimizer = torch.optim.Adam(model.parameters(), lr=config.lr)\n", - "\n", - " # Training\n", - " example_ct = 0\n", - " step_ct = 0\n", - " for epoch in range(config.epochs):\n", - " model.train()\n", - " for step, (images, labels) in enumerate(train_dl):\n", - " images, labels = images.to(device), labels.to(device)\n", - "\n", - " outputs = model(images)\n", - " train_loss = loss_func(outputs, labels)\n", - " optimizer.zero_grad()\n", - " train_loss.backward()\n", - " optimizer.step()\n", - "\n", - " example_ct += len(images)\n", - " metrics = {\"train/train_loss\": train_loss,\n", - " \"train/epoch\": (step + 1 + (n_steps_per_epoch 
* epoch)) / n_steps_per_epoch,\n", - " \"train/example_ct\": example_ct}\n", - "\n", - " if step + 1 < n_steps_per_epoch:\n", - " # Log train metrics to wandb\n", - " wandb.log(metrics)\n", - "\n", - " step_ct += 1\n", - "\n", - " val_loss, accuracy = validate_model(model, valid_dl, loss_func, log_images=(epoch==(config.epochs-1)))\n", - "\n", - " # Log train and validation metrics to wandb\n", - " val_metrics = {\"val/val_loss\": val_loss,\n", - " \"val/val_accuracy\": accuracy}\n", - " wandb.log({**metrics, **val_metrics})\n", - "\n", - " # Save the model checkpoint to wandb\n", - " torch.save(model, \"my_model.pt\")\n", - " wandb.log_model(\"./my_model.pt\", \"my_mnist_model\", aliases=[f\"epoch-{epoch+1}_dropout-{round(wandb.config.dropout, 4)}\"])\n", - "\n", - " print(f\"Epoch: {epoch+1}, Train Loss: {train_loss:.3f}, Valid Loss: {val_loss:3f}, Accuracy: {accuracy:.2f}\")\n", - "\n", - " # If you had a test set, this is how you could log it as a Summary metric\n", - " wandb.summary['test_accuracy'] = 0.8\n", - "\n", - " # Close your wandb run\n", - " wandb.finish()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "YW04-qY1mKSu" - }, - "source": [ - "You have now trained your first model using W&B. Click on one of the links above to see your metrics and see your saved model checkpoints in the Artifacts tab in the W&B App UI" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "4BmMm1Q0mKSu" - }, - "source": [ - "## (Optional) Set up a W&B Alert\n", - "\n", - "Create a [W&B Alerts](https://docs.wandb.ai/guides/track/alert) to send alerts to your Slack or email from your Python code.\n", - "\n", - "There are 2 steps to follow the first time you'd like to send a Slack or email alert, triggered from your code:\n", - "\n", - "1) Turn on Alerts in your W&B [User Settings](https://wandb.ai/settings)\n", - "2) Add `wandb.alert()` to your code. 
For example:\n", - "\n", - "```python\n", - "wandb.alert(\n", - " title=\"Low accuracy\",\n", - " text=f\"Accuracy is below the acceptable threshold\"\n", - ")\n", - "```\n", - "\n", - "The following cell shows a minimal example below to see how to use `wandb.alert`" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "4TwdDjTKmKSv" - }, - "outputs": [], - "source": [ - "# Start a wandb run\n", - "wandb.init(project=\"pytorch-intro\")\n", - "\n", - "# Simulating a model training loop\n", - "acc_threshold = 0.3\n", - "for training_step in range(1000):\n", - "\n", - " # Generate a random number for accuracy\n", - " accuracy = round(random.random() + random.random(), 3)\n", - " print(f'Accuracy is: {accuracy}, {acc_threshold}')\n", - "\n", - " # Log accuracy to wandb\n", - " wandb.log({\"Accuracy\": accuracy})\n", - "\n", - " # If the accuracy is below the threshold, fire a W&B Alert and stop the run\n", - " if accuracy <= acc_threshold:\n", - " # Send the wandb Alert\n", - " wandb.alert(\n", - " title='Low Accuracy',\n", - " text=f'Accuracy {accuracy} at step {training_step} is below the acceptable theshold, {acc_threshold}',\n", - " )\n", - " print('Alert triggered')\n", - " break\n", - "\n", - "# Mark the run as finished (useful in Jupyter notebooks)\n", - "wandb.finish()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "s3qmwUodmKSv" - }, - "source": [ - "You can find the full docs for [W&B Alerts here](https://docs.wandb.ai/guides/track/alert).\n", - "\n", - "## Next steps\n", - "The next tutorial you will learn how to do hyperparameter optimization using W&B Sweeps:\n", - "[Hyperparameters sweeps using PyTorch](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/pytorch/Organizing_Hyperparameter_Sweeps_in_PyTorch_with_W%26B.ipynb)" - ] - } - ], - "metadata": { - "accelerator": "GPU", - "colab": { - "provenance": [], - "toc_visible": true, - "include_colab_link": true - }, - "kernelspec": { 
- "display_name": "Python 3", - "name": "python3" - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\"Open\n", + "" + ] }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\"Open\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\"Weights\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Use [W&B](https://wandb.ai/site?utm_source=intro_colab&utm_medium=code&utm_campaign=intro) for machine learning experiment tracking, model checkpointing, collaboration with your team and more. See the full W&B Documentation [here](https://docs.wandb.ai/).\n", + "\n", + "In this notebook, you will create and track a machine learning experiment using a simple PyTorch model. By the end of the notebook, you will have an interactive project dashboard that you can share and customize with other members of your team. [View an example dashboard here](https://wandb.ai/wandb/wandb_example)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Prerequisites\n", + "\n", + "Install the W&B Python SDK and log in:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!pip install wandb -qU" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Log in to your W&B account\n", + "import wandb\n", + "import random\n", + "import math" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "wandb.login()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Simulate and track a machine learning experiment with W&B\n", + "\n", + "Create, track, and visualize a machine learning experiment. To do this:\n", + "\n", + "1. 
Initialize a [W&B run](https://docs.wandb.ai/guides/runs) and pass in the hyperparameters you want to track.\n",
+    "2. Within your training loop, log metrics such as the accuracy and loss."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import random\n",
+    "import math\n",
+    "\n",
+    "# Launch 5 simulated experiments\n",
+    "total_runs = 5\n",
+    "for run in range(total_runs):\n",
+    "    # 1. Start a new run to track this script\n",
+    "    wandb.init(\n",
+    "        # Set the project where this run will be logged\n",
+    "        project=\"basic-intro\",\n",
+    "        # We pass a run name (otherwise it’ll be randomly assigned, like sunshine-lollypop-10)\n",
+    "        name=f\"experiment_{run}\",\n",
+    "        # Track hyperparameters and run metadata\n",
+    "        config={\n",
+    "            \"learning_rate\": 0.02,\n",
+    "            \"architecture\": \"CNN\",\n",
+    "            \"dataset\": \"CIFAR-100\",\n",
+    "            \"epochs\": 10,\n",
+    "        })\n",
+    "\n",
+    "    # This simple block simulates a training loop logging metrics\n",
+    "    epochs = 10\n",
+    "    offset = random.random() / 5\n",
+    "    for epoch in range(2, epochs):\n",
+    "        acc = 1 - 2 ** -epoch - random.random() / epoch - offset\n",
+    "        loss = 2 ** -epoch + random.random() / epoch + offset\n",
+    "\n",
+    "        # 2. Log metrics from your script to W&B\n",
+    "        wandb.log({\"acc\": acc, \"loss\": loss})\n",
+    "\n",
+    "    # Mark the run as finished\n",
+    "    wandb.finish()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "View how your machine learning experiment performed in your W&B project. Copy and paste the URL printed by the previous cell. The URL will redirect you to a W&B project that contains a dashboard with graphs of the metrics you logged during training.\n",
+    "\n",
+    "The following image shows what a dashboard can look like:"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "![](https://i.imgur.com/Pell4Oo.png)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now that we know how to integrate W&B into a pseudo machine learning training loop, let's track a machine learning experiment using a basic PyTorch neural network. The following code will also upload model checkpoints to W&B that you can then share with other teams in your organization.\n",
+    "\n",
+    "## Track a machine learning experiment using PyTorch\n",
+    "\n",
+    "The following code cell defines and trains a simple MNIST classifier. During training, W&B prints out URLs. Click on the project page link to see your results stream live to a W&B project.\n",
+    "\n",
+    "W&B runs automatically log [metrics](https://docs.wandb.ai/ref/app/pages/run-page#charts-tab),\n",
+    "[system information](https://docs.wandb.ai/ref/app/pages/run-page#system-tab),\n",
+    "[hyperparameters](https://docs.wandb.ai/ref/app/pages/run-page#overview-tab),\n",
+    "[terminal output](https://docs.wandb.ai/ref/app/pages/run-page#logs-tab) and\n",
+    "you'll see an [interactive table](https://docs.wandb.ai/guides/data-vis)\n",
+    "with model inputs and outputs.\n",
+    "\n",
+    "### Set up the PyTorch DataLoader\n",
+    "The following cell defines some useful functions that we will need to train our machine learning model. The functions themselves are not unique to W&B, so we won't cover them in detail here. See the PyTorch documentation for more information on how to define a [forward and backward training loop](https://pytorch.org/tutorials/beginner/nn_tutorial.html), how to use [PyTorch DataLoaders](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html) to load data for training, and how to define PyTorch models using the [`torch.nn.Sequential` class](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "#@title\n",
+    "import torch, torchvision\n",
+    "import torch.nn as nn\n",
+    "from torchvision.datasets import MNIST\n",
+    "import torchvision.transforms as T\n",
+    "\n",
+    "MNIST.mirrors = [mirror for mirror in MNIST.mirrors if \"http://yann.lecun.com/\" not in mirror]\n",
+    "\n",
+    "device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n",
+    "\n",
+    "def get_dataloader(is_train, batch_size, slice=5):\n",
+    "    \"Get a training dataloader\"\n",
+    "    full_dataset = MNIST(root=\".\", train=is_train, transform=T.ToTensor(), download=True)\n",
+    "    sub_dataset = torch.utils.data.Subset(full_dataset, indices=range(0, len(full_dataset), slice))\n",
+    "    loader = torch.utils.data.DataLoader(dataset=sub_dataset,\n",
+    "                                         batch_size=batch_size,\n",
+    "                                         shuffle=True if is_train else False,\n",
+    "                                         pin_memory=True, num_workers=2)\n",
+    "    return loader\n",
+    "\n",
+    "def get_model(dropout):\n",
+    "    \"A simple model\"\n",
+    "    model = nn.Sequential(nn.Flatten(),\n",
+    "                          nn.Linear(28*28, 256),\n",
+    "                          nn.BatchNorm1d(256),\n",
+    "                          nn.ReLU(),\n",
+    "                          nn.Dropout(dropout),\n",
+    "                          nn.Linear(256, 10)).to(device)\n",
+    "    return model\n",
+    "\n",
+    "def validate_model(model, valid_dl, loss_func, log_images=False, batch_idx=0):\n",
+    "    \"Compute performance of the model on the validation dataset and log a wandb.Table\"\n",
+    "    model.eval()\n",
+    "    val_loss = 0.\n",
+    "    with torch.inference_mode():\n",
+    "        correct = 0\n",
+    "        for i, (images, labels) in enumerate(valid_dl):\n",
+    "            images, labels = images.to(device), labels.to(device)\n",
+    "\n",
+    "            # Forward pass ➡\n",
+    "            outputs = model(images)\n",
+    "            val_loss += loss_func(outputs, labels) * labels.size(0)\n",
+    "\n",
+    "            # Compute accuracy and accumulate\n",
+    "            _, predicted = torch.max(outputs.data, 1)\n",
+    "            correct += (predicted == labels).sum().item()\n",
+    "\n",
+    "            # Log one batch of images to the dashboard, always same batch_idx.\n",
+    "            if i == batch_idx and log_images:\n",
+    "                log_image_table(images, predicted, labels, outputs.softmax(dim=1))\n",
+    "    return val_loss / len(valid_dl.dataset), correct / len(valid_dl.dataset)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Create a table to compare the predicted values with the true values\n",
+    "\n",
+    "The following cell is unique to W&B, so let's go over it.\n",
+    "\n",
+    "In the cell we define a function called `log_image_table`. Though technically optional, this function creates a W&B Table object. We will use this table to show what the model predicted for each image.\n",
+    "\n",
+    "More specifically, each row will consist of the image fed to the model, along with its predicted value and the actual value (label)."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def log_image_table(images, predicted, labels, probs):\n", + " \"Log a wandb.Table with (img, pred, target, scores)\"\n", + " # Create a wandb Table to log images, labels and predictions to\n", + " table = wandb.Table(columns=[\"image\", \"pred\", \"target\"]+[f\"score_{i}\" for i in range(10)])\n", + " for img, pred, targ, prob in zip(images.to(\"cpu\"), predicted.to(\"cpu\"), labels.to(\"cpu\"), probs.to(\"cpu\")):\n", + " table.add_data(wandb.Image(img[0].numpy()*255), pred, targ, *prob.numpy())\n", + " wandb.log({\"predictions_table\":table}, commit=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Train your model and upload checkpoints\n", + "\n", + "The following code trains and saves model checkpoints to your project. Use model checkpoints like you normally would to assess how the model performed during training.\n", + "\n", + "W&B also makes it easy to share your saved models and model checkpoints with other members of your team or organization. To learn how to share your model and model checkpoints with members outside of your team, see [W&B Registry](https://docs.wandb.ai/guides/registry)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Launch 3 experiments, trying different dropout rates\n", + "for _ in range(3):\n", + " # initialise a wandb run\n", + " wandb.init(\n", + " project=\"pytorch-intro\",\n", + " config={\n", + " \"epochs\": 5,\n", + " \"batch_size\": 128,\n", + " \"lr\": 1e-3,\n", + " \"dropout\": random.uniform(0.01, 0.80),\n", + " })\n", + "\n", + " # Copy your config\n", + " config = wandb.config\n", + "\n", + " # Get the data\n", + " train_dl = get_dataloader(is_train=True, batch_size=config.batch_size)\n", + " valid_dl = get_dataloader(is_train=False, batch_size=2*config.batch_size)\n", + " n_steps_per_epoch = math.ceil(len(train_dl.dataset) / config.batch_size)\n", + "\n", + " # A simple MLP model\n", + " model = get_model(config.dropout)\n", + "\n", + " # Make the loss and optimizer\n", + " loss_func = nn.CrossEntropyLoss()\n", + " optimizer = torch.optim.Adam(model.parameters(), lr=config.lr)\n", + "\n", + " # Training\n", + " example_ct = 0\n", + " step_ct = 0\n", + " for epoch in range(config.epochs):\n", + " model.train()\n", + " for step, (images, labels) in enumerate(train_dl):\n", + " images, labels = images.to(device), labels.to(device)\n", + "\n", + " outputs = model(images)\n", + " train_loss = loss_func(outputs, labels)\n", + " optimizer.zero_grad()\n", + " train_loss.backward()\n", + " optimizer.step()\n", + "\n", + " example_ct += len(images)\n", + " metrics = {\"train/train_loss\": train_loss,\n", + " \"train/epoch\": (step + 1 + (n_steps_per_epoch * epoch)) / n_steps_per_epoch,\n", + " \"train/example_ct\": example_ct}\n", + "\n", + " if step + 1 < n_steps_per_epoch:\n", + " # Log train metrics to wandb\n", + " wandb.log(metrics)\n", + "\n", + " step_ct += 1\n", + "\n", + " val_loss, accuracy = validate_model(model, valid_dl, loss_func, log_images=(epoch==(config.epochs-1)))\n", + "\n", + " # Log train and validation metrics to wandb\n", + " 
val_metrics = {\"val/val_loss\": val_loss,\n",
+    "                       \"val/val_accuracy\": accuracy}\n",
+    "        wandb.log({**metrics, **val_metrics})\n",
+    "\n",
+    "        # Save the model checkpoint to wandb\n",
+    "        torch.save(model, \"my_model.pt\")\n",
+    "        wandb.log_model(\"./my_model.pt\", \"my_mnist_model\", aliases=[f\"epoch-{epoch+1}_dropout-{round(wandb.config.dropout, 4)}\"])\n",
+    "\n",
+    "        print(f\"Epoch: {epoch+1}, Train Loss: {train_loss:.3f}, Valid Loss: {val_loss:.3f}, Accuracy: {accuracy:.2f}\")\n",
+    "\n",
+    "    # If you had a test set, this is how you could log it as a Summary metric\n",
+    "    wandb.summary['test_accuracy'] = 0.8\n",
+    "\n",
+    "    # Close your wandb run\n",
+    "    wandb.finish()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You have now trained your first model using W&B. Click on one of the links above to see your metrics and your saved model checkpoints in the Artifacts tab of the W&B App UI."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## (Optional) Set up a W&B Alert\n",
+    "\n",
+    "Create a [W&B Alert](https://docs.wandb.ai/guides/track/alert) to send alerts to your Slack or email from your Python code.\n",
+    "\n",
+    "There are two steps to follow the first time you'd like to send a Slack or email alert triggered from your code:\n",
+    "\n",
+    "1) Turn on Alerts in your W&B [User Settings](https://wandb.ai/settings)\n",
+    "2) Add `wandb.alert()` to your code. For example:\n",
+    "\n",
+    "```python\n",
+    "wandb.alert(\n",
+    "    title=\"Low accuracy\",\n",
+    "    text=f\"Accuracy is below the acceptable threshold\"\n",
+    ")\n",
+    "```\n",
+    "\n",
+    "The following cell shows a minimal example of how to use `wandb.alert`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Start a wandb run\n",
+    "wandb.init(project=\"pytorch-intro\")\n",
+    "\n",
+    "# Simulate a model training loop\n",
+    "acc_threshold = 0.3\n",
+    "for training_step in range(1000):\n",
+    "\n",
+    "    # Generate a random number for accuracy\n",
+    "    accuracy = round(random.random() + random.random(), 3)\n",
+    "    print(f'Accuracy is: {accuracy}, {acc_threshold}')\n",
+    "\n",
+    "    # Log accuracy to wandb\n",
+    "    wandb.log({\"Accuracy\": accuracy})\n",
+    "\n",
+    "    # If the accuracy is below the threshold, fire a W&B Alert and stop the run\n",
+    "    if accuracy <= acc_threshold:\n",
+    "        # Send the wandb Alert\n",
+    "        wandb.alert(\n",
+    "            title='Low Accuracy',\n",
+    "            text=f'Accuracy {accuracy} at step {training_step} is below the acceptable threshold, {acc_threshold}',\n",
+    "        )\n",
+    "        print('Alert triggered')\n",
+    "        break\n",
+    "\n",
+    "# Mark the run as finished (useful in Jupyter notebooks)\n",
+    "wandb.finish()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "You can find the full docs for [W&B Alerts here](https://docs.wandb.ai/guides/track/alert).\n",
+    "\n",
+    "## Next steps\n",
+    "In the next tutorial, you will learn how to do hyperparameter optimization using W&B Sweeps:\n",
+    "[Hyperparameter sweeps using PyTorch](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/pytorch/Organizing_Hyperparameter_Sweeps_in_PyTorch_with_W%26B.ipynb)"
+   ]
+  }
+ ],
+ "metadata": {
+  "accelerator": "GPU",
+  "colab": {
+   "include_colab_link": true,
+   "provenance": [],
+   "toc_visible": true
+  },
+  "kernelspec": {
+   "display_name": "Python 3",
+   "name": "python3"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}