diff --git a/colabs/raytune/RayTune_with_wandb.ipynb b/colabs/raytune/RayTune_with_wandb.ipynb
index 00dc6caf..3ad887e5 100644
--- a/colabs/raytune/RayTune_with_wandb.ipynb
+++ b/colabs/raytune/RayTune_with_wandb.ipynb
@@ -1,409 +1,340 @@
 {
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "CHj4COZvzWta"
-   },
-   "source": [
-    "\"Open\n",
-    ""
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "f5SyUYlHzWtd"
-   },
-   "source": [
-    "\"Weights\n",
-    "\n",
-    "\n",
-    "\n",
-    "Use Weights & Biases for machine learning experiment tracking, dataset versioning, and project collaboration.\n",
-    "\n",
-    "
 \n",
-    "\n",
-    "\"Weights\n",
-    "\n",
-    "
"
" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "oQYw5uVFzWtd" - }, - "source": [ - "\n", - "# ๐ŸŒž Ray/Tune and ๐Ÿ‹๏ธโ€โ™€๏ธ Weights & Biases\n", - "\n", - "Both Weights and Biases and Ray/Tune are built for scale and handle millions of models every month for teams doing some of the most cutting-edge deep learning research.\n", - "\n", - "[W&B](https://wandb.com) is a toolkit with everything you need to track, reproduce, and gain insights from your models easily; [Ray/Tune](https://docs.ray.io/en/latest/tune/) provides a simple interface for scaling and running distributed experiments.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "SFI8z2gWzWte" - }, - "source": [ - "\n", - "### ๐Ÿค They're a natural match! ๐Ÿค\n", - "\n", - "Here's just a few reasons why our community likes Ray/Tune โ€“\n", - "\n", - "* **Simple distributed execution**: Ray/Tune makes it easy to scale all the way from a single node on a laptop, through to multiple GPUs, and up to multiple nodes on multiple machines.\n", - "* **State-of-the-art algorithms**: Ray/Tune has tested implementations of a huge number of potent scheduling algorithms including\n", - "[Population-Based Training](https://docs.ray.io/en/latest/tune/tutorials/tune-advanced-tutorial.html),\n", - "[ASHA](https://docs.ray.io/en/master/tune/tutorials/tune-tutorial.html#early-stopping-with-asha),\n", - "and\n", - "[HyperBand](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#hyperband-tune-schedulers-hyperbandscheduler)\n", - "* **Method agnostic**: Ray/Tune works across deep learning frameworks (including PyTorch, Keras, Tensorflow, and PyTorchLightning) and with other ML methods like gradient-boosted trees (XGBoost, LightGBM)\n", - "* **Fault-tolerance**: Ray/Tune is built on top of Ray, providing tolerance for failed runs out of the box.\n", - "\n", - "This Colab demonstrates how this integration works for a simple grid search over two hyperparameters. If you've got any questions about the details,\n", - "check out\n", - "[our documentation](https://docs.wandb.com/library/integrations/ray-tune)\n", - "or the\n", - "[documentation for Ray/Tune](https://docs.ray.io/en/master/tune/api_docs/integration.html#weights-and-biases-tune-integration-wandb).\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "Hw1mGxhlzWte" - }, - "source": [ - "W&B integrates with `ray.tune` by offering two lightweight standalone integrations:\n", - "\n", - "1. For simple cases, `WandbLoggerCallback` automatically logs metrics reported to Tune to W&B, along with the configuration of the experiment, using Tune's [`logger` interface](https://docs.ray.io/en/latest/tune/api_docs/logging.html).\n", - "2. The `@wandb_mixin` decorator gives you greater control over logging by letting you call `wandb.log` inside the decorated function, allowing you to [log custom metrics, plots, and other outputs, like media](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/wandb-log/Log_(Almost)_Anything_with_W%26B_Media.ipynb).\n", - "\n", - "These methods can be used together or independently.\n", - "\n", - "The example below demonstrates how they can be used together." 
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "qRrP4G23zWte"
-   },
-   "source": [
-    "# 🧹 Running a hyperparameter sweep with W&B and Ray/Tune"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "QfDVmu9szWte"
-   },
-   "source": [
-    "## 📥 Install, `import`, and set seeds"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "XRjOn9QRzWte"
-   },
-   "source": [
-    "\n",
-    "Let's start by installing the libraries and importing everything we need.\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "id": "LCVSvebNzWtf"
-   },
-   "outputs": [],
-   "source": [
-    "!pip install -Uq ray[tune] wandb"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "id": "qBCk5SvQ1q_n"
-   },
-   "outputs": [],
-   "source": [
-    "import os\n",
-    "os.environ[\"WANDB_API_KEY\"] = \"<your-api-key-redacted>\" #@param {type:\"string\"}"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "id": "dfHgJfVszWtf"
-   },
-   "outputs": [],
-   "source": [
-    "import random\n",
-    "import numpy as np\n",
-    "from filelock import FileLock\n",
-    "import tempfile\n",
-    "from ray import train, tune\n",
-    "from ray.air.integrations.wandb import WandbLoggerCallback, setup_wandb\n",
-    "from ray.tune.schedulers import AsyncHyperBandScheduler\n",
-    "from ray.train import Checkpoint\n",
-    "import torch\n",
-    "import torch.optim as optim\n",
-    "import wandb"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "colab": {
-     "base_uri": "https://localhost:8080/"
-    },
-    "id": "Y1ysNX6pzWtf",
-    "outputId": "f6a36dad-1d03-4f13-d552-e00896d919db"
-   },
-   "outputs": [],
-   "source": [
-    "wandb.login()"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "mYPSEuIgzWtg"
-   },
-   "source": [
-    "We'll make use of Ray's handy [`mnist_pytorch` example code](https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/mnist_pytorch.py)."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "id": "8b-t9a7pzWtg"
-   },
-   "outputs": [],
-   "source": [
-    "from ray.tune.examples.mnist_pytorch import ConvNet, get_data_loaders, test_func, train_func"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "BJ-zAJWpzWtg"
-   },
-   "source": [
-    "In order to make this experiment reproducible, we'll set the seeds for random number generators of various libraries used in this experiment."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "id": "GbXqaFYizWtg"
-   },
-   "outputs": [],
-   "source": [
-    "torch.backends.cudnn.deterministic = True\n",
-    "random.seed(2022)\n",
-    "np.random.seed(2022)\n",
-    "torch.manual_seed(2022)\n",
-    "torch.cuda.manual_seed_all(2022)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "nAmSUMP4zWtg"
-   },
-   "source": [
-    "## 🤝 Integrating W&B with Ray/Tune"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "OVmU1CRgzWtg"
-   },
-   "source": [
-    "Now, we define our training process, decorated with `@wandb_mixin` so we can call `wandb.log` to log our custom metric\n",
-    "(here, just the error rate; you might also [log media](https://docs.wandb.com/library/log#media), e.g. images from the validation set, captioned by the model predictions).\n",
-    "\n",
-    "When we execute our hyperparameter sweep below,\n",
-    "this function will be called with a `config`uration dictionary\n",
-    "that contains values for any hyperparameters.\n",
-    "For simplicity, we only have two hyperparameters here:\n",
-    "the learning rate and momentum value for accelerated SGD."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "id": "nf2FKgaM21Em"
-   },
-   "outputs": [],
-   "source": [
-    "def train_mnist(config):\n",
-    "    # Setup wandb\n",
-    "    wandb = setup_wandb(config)\n",
-    "    should_checkpoint = config.get(\"should_checkpoint\", False)\n",
-    "    use_cuda = torch.cuda.is_available()\n",
-    "    device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n",
-    "    train_loader, test_loader = get_data_loaders()\n",
-    "    model = ConvNet().to(device)\n",
-    "\n",
-    "    optimizer = optim.SGD(\n",
-    "        model.parameters(), lr=config[\"lr\"], momentum=config[\"momentum\"]\n",
-    "    )\n",
-    "    while True:\n",
-    "        train_func(model, optimizer, train_loader, device)\n",
-    "        acc = test_func(model, test_loader, device)\n",
-    "        metrics = {\"mean_accuracy\": acc}\n",
-    "\n",
-    "        # Report metrics (and possibly a checkpoint)\n",
-    "        if should_checkpoint:\n",
-    "            with tempfile.TemporaryDirectory() as tempdir:\n",
-    "                torch.save(model.state_dict(), os.path.join(tempdir, \"model.pt\"))\n",
-    "                train.report(metrics, checkpoint=Checkpoint.from_directory(tempdir))\n",
-    "        else:\n",
-    "            train.report(metrics)\n",
-    "        # enables logging custom metrics using wandb.log()\n",
-    "        error_rate = 100 * (1 - acc)\n",
-    "        wandb.log({\"error_rate\": error_rate})"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "FjSalAQkzWth"
-   },
-   "source": [
-    "## 🚀 Launching a Sweep with W&B and Ray/Tune"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "lBS6iyXAzWth"
-   },
-   "source": [
-    "We're now almost ready to call `tune.run` to launch our hyperparameter sweep!\n",
-    "We just need to do three things:\n",
-    "1. set up a `wandb.Run`,\n",
-    "2. give the `WandbLoggerCallback` to `tune.run` so we can capture the output of `tune.report`, and\n",
-    "3. set up our hyperparameter sweep.\n",
-    "\n",
-    "A `wandb.Run` is normally created by calling `wandb.init`.\n",
-    "`tune` will handle that for you, you just need to pass\n",
-    "the arguments as a dictionary\n",
-    "(see [our documentation](https://docs.wandb.com/library/init) for details on `wandb.init`).\n",
-    "At the bare minimum, you need to pass in a `project` name --\n",
-    "sort of like a `git` repo name, but for your ML projects.\n",
-    "\n",
-    "In addition to holding arguments for `wandb.init`,\n",
-    "that dictionary also has a few special keys, described in\n",
-    "[the documentation for the `WandbLoggerCallback`](https://docs.ray.io/en/master/tune/tutorials/tune-wandb.html)."
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "E6HRghT9zWth"
-   },
-   "source": [
-    "We handle steps 2 and 3 when we invoke `tune.run`.\n",
-    "\n",
-    "Step 2 is handled by passing in the `WandbLoggerCallback` class in a list\n",
-    "to the `loggers` argument of `tune.run`.\n",
-    "\n",
-    "The setup of the hyperparameter sweep is handled by the\n",
-    "`config` argument of `tune.run`.\n",
-    "For the purposes of the integration,\n",
-    "the most important part is that this is where we pass in the `wandb_init`\n",
-    "dictionary.\n",
-    "\n",
-    "This is also where we configure the \"meat\" of the hyperparameter sweep:\n",
-    "what are the hyperparameters we're sweeping over,\n",
-    "and how do we choose their values.\n",
-    "\n",
-    "Here, we do a simple grid search, but\n",
-    "[Ray/Tune provides lots of sophisticated options](https://docs.ray.io/en/latest/tune/api_docs/suggestion.html)."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "colab": {
-     "base_uri": "https://localhost:8080/"
-    },
-    "id": "tw4eawRD0u8y",
-    "outputId": "e4ddf59b-5445-4d55-8f69-84a9fcb35024"
-   },
-   "outputs": [],
-   "source": [
-    "# for early stopping\n",
-    "sched = AsyncHyperBandScheduler()\n",
-    "\n",
-    "resources_per_trial = {\"gpu\": 1} # set this for GPUs\n",
-    "tuner = tune.Tuner(\n",
-    "    tune.with_resources(train_mnist, resources=resources_per_trial),\n",
-    "    tune_config=tune.TuneConfig(\n",
-    "        metric=\"mean_accuracy\",\n",
-    "        mode=\"max\",\n",
-    "        scheduler=sched,\n",
-    "        num_samples=50,\n",
-    "    ),\n",
-    "    run_config=train.RunConfig(\n",
-    "        name=\"exp\",\n",
-    "        stop={\n",
-    "            \"mean_accuracy\": 0.98,\n",
-    "            \"training_iteration\": 5,\n",
-    "        },\n",
-    "        callbacks=[WandbLoggerCallback(project=\"raytune-colab\")]\n",
-    "    ),\n",
-    "    param_space={\n",
-    "        \"lr\": tune.loguniform(1e-4, 1e-2),\n",
-    "        \"momentum\": tune.uniform(0.1, 0.9),\n",
-    "    },\n",
-    ")\n",
-    "results = tuner.fit()"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {
-    "colab": {
-     "base_uri": "https://localhost:8080/"
-    },
-    "id": "1f8oI7tYzWth",
-    "outputId": "9fece67a-47e7-443a-ab8f-971f3d8dfd94"
-   },
-   "outputs": [],
-   "source": [
-    "print(\"Best config is:\", results.get_best_result().config)"
-   ]
-  }
- ],
- "metadata": {
-  "accelerator": "GPU",
-  "colab": {
-   "provenance": [],
-   "toc_visible": true
-  },
-  "kernelspec": {
-   "display_name": "Python 3",
-   "name": "python3"
-  }
- },
- "nbformat": 4,
- "nbformat_minor": 0
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\"Open\n",
+    ""
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\"Weights\n",
+    "\n",
+    "\n",
+    "\n",
+    "Use Weights & Biases for machine learning experiment tracking, dataset versioning, and project collaboration.\n",
+    "\n",
+    "
 \n",
\n", + "\n", + "\"Weights\n", + "\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "# ๐ŸŒž Ray/Tune and ๐Ÿ‹๏ธโ€โ™€๏ธ Weights & Biases\n", + "\n", + "Both Weights and Biases and Ray/Tune are built for scale and handle millions of models every month for teams doing some of the most cutting-edge deep learning research.\n", + "\n", + "[W&B](https://wandb.com) is a toolkit with everything you need to track, reproduce, and gain insights from your models easily; [Ray/Tune](https://docs.ray.io/en/latest/tune/) provides a simple interface for scaling and running distributed experiments.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### ๐Ÿค They're a natural match! ๐Ÿค\n", + "\n", + "Here's just a few reasons why our community likes Ray/Tune โ€“\n", + "\n", + "* **Simple distributed execution**: Ray/Tune makes it easy to scale all the way from a single node on a laptop, through to multiple GPUs, and up to multiple nodes on multiple machines.\n", + "* **State-of-the-art algorithms**: Ray/Tune has tested implementations of a huge number of potent scheduling algorithms including\n", + "[Population-Based Training](https://docs.ray.io/en/latest/tune/tutorials/tune-advanced-tutorial.html),\n", + "[ASHA](https://docs.ray.io/en/master/tune/tutorials/tune-tutorial.html#early-stopping-with-asha),\n", + "and\n", + "[HyperBand](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#hyperband-tune-schedulers-hyperbandscheduler)\n", + "* **Method agnostic**: Ray/Tune works across deep learning frameworks (including PyTorch, Keras, Tensorflow, and PyTorchLightning) and with other ML methods like gradient-boosted trees (XGBoost, LightGBM)\n", + "* **Fault-tolerance**: Ray/Tune is built on top of Ray, providing tolerance for failed runs out of the box.\n", + "\n", + "This Colab demonstrates how this integration works for a simple grid search over two hyperparameters. If you've got any questions about the details,\n", + "check out\n", + "[our documentation](https://docs.wandb.com/library/integrations/ray-tune)\n", + "or the\n", + "[documentation for Ray/Tune](https://docs.ray.io/en/master/tune/api_docs/integration.html#weights-and-biases-tune-integration-wandb).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "W&B integrates with `ray.tune` by offering two lightweight standalone integrations:\n", + "\n", + "1. For simple cases, `WandbLoggerCallback` automatically logs metrics reported to Tune to W&B, along with the configuration of the experiment, using Tune's [`logger` interface](https://docs.ray.io/en/latest/tune/api_docs/logging.html).\n", + "2. The `@wandb_mixin` decorator gives you greater control over logging by letting you call `wandb.log` inside the decorated function, allowing you to [log custom metrics, plots, and other outputs, like media](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/wandb-log/Log_(Almost)_Anything_with_W%26B_Media.ipynb).\n", + "\n", + "These methods can be used together or independently.\n", + "\n", + "The example below demonstrates how they can be used together." 
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# 🧹 Running a hyperparameter sweep with W&B and Ray/Tune"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 📥 Install, `import`, and set seeds"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "Let's start by installing the libraries and importing everything we need.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!pip install -Uq ray[tune] wandb"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "import random\n",
+    "import numpy as np\n",
+    "from filelock import FileLock\n",
+    "import tempfile\n",
+    "from ray import train, tune\n",
+    "from ray.air.integrations.wandb import WandbLoggerCallback, setup_wandb\n",
+    "from ray.tune.schedulers import AsyncHyperBandScheduler\n",
+    "from ray.train import Checkpoint\n",
+    "import torch\n",
+    "import torch.optim as optim\n",
+    "import wandb"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import wandb\n",
+    "wandb.login()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We'll make use of Ray's handy [`mnist_pytorch` example code](https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/mnist_pytorch.py)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from ray.tune.examples.mnist_pytorch import ConvNet, get_data_loaders, test_func, train_func"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "In order to make this experiment reproducible, we'll set the seeds for the random number generators of the various libraries used in this experiment."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "torch.backends.cudnn.deterministic = True\n",
+    "random.seed(2022)\n",
+    "np.random.seed(2022)\n",
+    "torch.manual_seed(2022)\n",
+    "torch.cuda.manual_seed_all(2022)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 🤝 Integrating W&B with Ray/Tune"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Now, we define our training process, which calls `setup_wandb` so we can use `wandb.log` to log our custom metric\n",
+    "(here, just the error rate; you might also [log media](https://docs.wandb.com/library/log#media), e.g. images from the validation set, captioned by the model predictions).\n",
+    "\n",
+    "When we execute our hyperparameter sweep below,\n",
+    "this function will be called with a `config`uration dictionary\n",
+    "that contains values for any hyperparameters.\n",
+    "For simplicity, we only have two hyperparameters here:\n",
+    "the learning rate and momentum value for accelerated SGD."
+   ]
+  },
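+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A quick aside before the real training function: a sketch of how you might\n",
+    "customize the per-trial run that `setup_wandb` creates. This assumes, per the\n",
+    "Ray docs, that extra keyword arguments are passed through to `wandb.init`;\n",
+    "the function name and the group name here are made up for illustration."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Illustrative only: name the per-trial W&B run's project and group explicitly.\n",
+    "def train_with_named_run(config):\n",
+    "    # assumed: extra keyword arguments are forwarded to wandb.init\n",
+    "    run = setup_wandb(config, project=\"raytune-colab\", group=\"mnist-sweep\")\n",
+    "    run.log({\"lr\": config[\"lr\"]})  # log directly to this trial's run\n",
+    "    train.report({\"mean_accuracy\": 0.0})  # placeholder metric for the sketch"
+   ]
+  },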
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def train_mnist(config):\n",
+    "    # Setup wandb\n",
+    "    wandb = setup_wandb(config)\n",
+    "    should_checkpoint = config.get(\"should_checkpoint\", False)\n",
+    "    use_cuda = torch.cuda.is_available()\n",
+    "    device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n",
+    "    train_loader, test_loader = get_data_loaders()\n",
+    "    model = ConvNet().to(device)\n",
+    "\n",
+    "    optimizer = optim.SGD(\n",
+    "        model.parameters(), lr=config[\"lr\"], momentum=config[\"momentum\"]\n",
+    "    )\n",
+    "    while True:\n",
+    "        train_func(model, optimizer, train_loader, device)\n",
+    "        acc = test_func(model, test_loader, device)\n",
+    "        metrics = {\"mean_accuracy\": acc}\n",
+    "\n",
+    "        # Report metrics (and possibly a checkpoint)\n",
+    "        if should_checkpoint:\n",
+    "            with tempfile.TemporaryDirectory() as tempdir:\n",
+    "                torch.save(model.state_dict(), os.path.join(tempdir, \"model.pt\"))\n",
+    "                train.report(metrics, checkpoint=Checkpoint.from_directory(tempdir))\n",
+    "        else:\n",
+    "            train.report(metrics)\n",
+    "        # enables logging custom metrics using wandb.log()\n",
+    "        error_rate = 100 * (1 - acc)\n",
+    "        wandb.log({\"error_rate\": error_rate})"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 🚀 Launching a Sweep with W&B and Ray/Tune"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We're now almost ready to call `tuner.fit()` to launch our hyperparameter sweep!\n",
+    "We just need to do three things:\n",
+    "1. set up a `wandb.Run`,\n",
+    "2. give the `WandbLoggerCallback` to our `tune.Tuner` so we can capture the output of `train.report`, and\n",
+    "3. set up our hyperparameter sweep.\n",
+    "\n",
+    "A `wandb.Run` is normally created by calling `wandb.init`.\n",
+    "Tune will handle that for you; you just need to pass\n",
+    "the `wandb.init` arguments as keyword arguments to the `WandbLoggerCallback`\n",
+    "(see [our documentation](https://docs.wandb.com/library/init) for details on `wandb.init`).\n",
+    "At the bare minimum, you need to pass in a `project` name --\n",
+    "sort of like a `git` repo name, but for your ML projects.\n",
+    "\n",
+    "In addition to accepting arguments for `wandb.init`,\n",
+    "the callback also takes a few special arguments of its own, described in\n",
+    "[the documentation for the `WandbLoggerCallback`](https://docs.ray.io/en/master/tune/tutorials/tune-wandb.html)."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We handle steps 2 and 3 when we construct the `tune.Tuner`.\n",
+    "\n",
+    "Step 2 is handled by passing a `WandbLoggerCallback` instance in a list\n",
+    "to the `callbacks` argument of `train.RunConfig`.\n",
+    "\n",
+    "The setup of the hyperparameter sweep is handled by the\n",
+    "`param_space` argument of `tune.Tuner`,\n",
+    "together with the `tune.TuneConfig` that sets the metric, mode, and scheduler.\n",
+    "For the purposes of the integration,\n",
+    "the most important part is the `WandbLoggerCallback` in the `RunConfig`,\n",
+    "which receives our `project` name.\n",
+    "\n",
+    "`param_space` is also where we configure the \"meat\" of the hyperparameter sweep:\n",
+    "which hyperparameters we're sweeping over,\n",
+    "and how we choose their values.\n",
+    "\n",
+    "Here, we do a simple random search over the learning rate and the momentum, but\n",
+    "[Ray/Tune provides lots of sophisticated options](https://docs.ray.io/en/latest/tune/api_docs/suggestion.html)."
+   ]
+  },
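+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you'd rather run an exhaustive grid search, only the `param_space` needs to\n",
+    "change -- a sketch with made-up grid values is below. With a pure grid,\n",
+    "`num_samples=1` already covers every combination.\n",
+    "(The full sweep in the next cell sticks with random sampling.)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Hypothetical alternative: sweep an explicit grid instead of sampling.\n",
+    "grid_param_space = {\n",
+    "    \"lr\": tune.grid_search([1e-4, 1e-3, 1e-2]),\n",
+    "    \"momentum\": tune.grid_search([0.1, 0.5, 0.9]),\n",
+    "}"
+   ]
+  },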
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# for early stopping\n",
+    "sched = AsyncHyperBandScheduler()\n",
+    "\n",
+    "resources_per_trial = {\"gpu\": 1} # set this for GPUs\n",
+    "tuner = tune.Tuner(\n",
+    "    tune.with_resources(train_mnist, resources=resources_per_trial),\n",
+    "    tune_config=tune.TuneConfig(\n",
+    "        metric=\"mean_accuracy\",\n",
+    "        mode=\"max\",\n",
+    "        scheduler=sched,\n",
+    "        num_samples=50,\n",
+    "    ),\n",
+    "    run_config=train.RunConfig(\n",
+    "        name=\"exp\",\n",
+    "        stop={\n",
+    "            \"mean_accuracy\": 0.98,\n",
+    "            \"training_iteration\": 5,\n",
+    "        },\n",
+    "        callbacks=[WandbLoggerCallback(project=\"raytune-colab\")]\n",
+    "    ),\n",
+    "    param_space={\n",
+    "        \"lr\": tune.loguniform(1e-4, 1e-2),\n",
+    "        \"momentum\": tune.uniform(0.1, 0.9),\n",
+    "    },\n",
+    ")\n",
+    "results = tuner.fit()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print(\"Best config is:\", results.get_best_result().config)"
+   ]
+  }
+ ],
+ "metadata": {
+  "accelerator": "GPU",
+  "colab": {
+   "provenance": [],
+   "toc_visible": true
+  },
+  "kernelspec": {
+   "display_name": "Python 3",
+   "name": "python3"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
 }