From 168572dba59011569a1b62ede0ce1dca5fc8bfcf Mon Sep 17 00:00:00 2001
From: David Ackerman
Date: Sat, 10 Feb 2024 14:54:56 -0500
Subject: [PATCH] add comments and rearrange example

---
 .../finetune_liver_peroxisome.ipynb | 352 +++++++++++-------
 .../finetune_liver_peroxisome.md    | 235 +++++++-----
 2 files changed, 356 insertions(+), 231 deletions(-)

diff --git a/examples/distance_task/finetune_liver_peroxisome.ipynb b/examples/distance_task/finetune_liver_peroxisome.ipynb
index 9aadc8ec7..a75ddda42 100644
--- a/examples/distance_task/finetune_liver_peroxisome.ipynb
+++ b/examples/distance_task/finetune_liver_peroxisome.ipynb
@@ -5,14 +5,35 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Dacapo"
+    "# Dacapo\n",
+    "\n",
+    "DaCapo is a framework that allows for easy configuration and execution of established machine learning techniques on arbitrarily large volumes of multi-dimensional images.\n",
+    "\n",
+    "DaCapo has 4 major configurable components:\n",
+    "1. **dacapo.datasplits.DataSplit**\n",
+    "\n",
+    "2. **dacapo.tasks.Task**\n",
+    "\n",
+    "3. **dacapo.architectures.Architecture**\n",
+    "\n",
+    "4. **dacapo.trainers.Trainer**\n",
+    "\n",
+    "These are then combined in a single **dacapo.experiments.Run** that includes your starting point (whether you want to start training from scratch or continue from a previously trained model) and stopping criterion (the number of iterations you want to train)."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Imports"
+    "## Config Store\n",
+    "\n",
+    "To define where the data goes, create a dacapo.yaml configuration file. Here is a template:\n",
+    "```yaml\n",
+    "mongodbhost: mongodb://dbuser:dbpass@dburl:dbport/\n",
+    "mongodbname: dacapo\n",
+    "runs_base_dir: /path/to/my/data/storage\n",
+    "```\n",
+    "The `runs_base_dir` defines where your on-disk data will be stored. The `mongodbhost` and `mongodbname` define the MongoDB host and database that will store your cloud data. If you want to store everything on disk, replace `mongodbhost` and `mongodbname` with the single entry `type: files` and everything will be saved to disk."
    ]
   },
   {
@@ -21,153 +42,37 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from pathlib import PosixPath\n",
-    "from dacapo.experiments.datasplits.datasets.arrays import (\n",
-    "    BinarizeArrayConfig,\n",
-    "    IntensitiesArrayConfig,\n",
-    "    MissingAnnotationsMaskConfig,\n",
-    "    ResampledArrayConfig,\n",
-    "    ZarrArrayConfig,\n",
-    ")\n",
-    "from dacapo.experiments.tasks import DistanceTaskConfig\n",
-    "from dacapo.experiments.architectures import CNNectomeUNetConfig\n",
-    "from dacapo.experiments.trainers import GunpowderTrainerConfig\n",
-    "from dacapo.experiments.trainers.gp_augments import (\n",
-    "    ElasticAugmentConfig,\n",
-    "    GammaAugmentConfig,\n",
-    "    IntensityAugmentConfig,\n",
-    "    IntensityScaleShiftAugmentConfig,\n",
-    ")\n",
-    "from dacapo.experiments.datasplits import TrainValidateDataSplitConfig\n",
-    "from dacapo.experiments.datasplits.datasets import RawGTDatasetConfig\n",
-    "from dacapo.experiments.starts import StartConfig\n",
-    "from dacapo.experiments import RunConfig\n",
-    "from dacapo.store.create_store import create_config_store"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Config Store"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 17,
-   "metadata": {},
-   "outputs": [],
-   "source": [
+    "from dacapo.store.create_store import create_config_store\n",
+    "\n",
     "config_store = create_config_store()"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Task"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 18,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "task_config = DistanceTaskConfig(\n",
-    "    name=\"example_distances_8nm_peroxisome\",\n",
-    "    channels=[\"peroxisome\"],\n",
-    "    clip_distance=80.0,\n",
-    "    tol_distance=80.0,\n",
-    "    scale_factor=160.0,\n",
-    "    mask_distances=True,\n",
-    ")\n",
-    "config_store.store_task_config(task_config)"
-   ]
-  },
   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Architecture"
+    "## Datasplit\n",
+    "Where can you find your data? What format is it in? Does it need to be normalized? What data do you want to use for validation?"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 20,
+   "execution_count": 7,
    "metadata": {},
    "outputs": [],
    "source": [
-    "architecture_config = CNNectomeUNetConfig(\n",
-    "    name=\"example_attention-upsample-unet\",\n",
-    "    input_shape=(216, 216, 216),\n",
-    "    fmaps_out=72,\n",
-    "    fmaps_in=1,\n",
-    "    num_fmaps=12,\n",
-    "    fmap_inc_factor=6,\n",
-    "    downsample_factors=[(2, 2, 2), (3, 3, 3), (3, 3, 3)],\n",
-    "    kernel_size_down=None,\n",
-    "    kernel_size_up=None,\n",
-    "    eval_shape_increase=(72, 72, 72),\n",
-    "    upsample_factors=[(2, 2, 2)],\n",
-    "    constant_upsample=True,\n",
-    "    padding=\"valid\",\n",
-    ")\n",
-    "config_store.store_architecture_config(architecture_config)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Trainer"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 21,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "trainer_config = GunpowderTrainerConfig(\n",
-    "    name=\"example_default_one_label_finetuning\",\n",
-    "    batch_size=2,\n",
-    "    learning_rate=1e-05,\n",
-    "    num_data_fetchers=20,\n",
-    "    augments=[\n",
-    "        ElasticAugmentConfig(\n",
-    "            control_point_spacing=[100, 100, 100],\n",
-    "            control_point_displacement_sigma=[10.0, 10.0, 10.0],\n",
-    "            rotation_interval=(0.0, 1.5707963267948966),\n",
-    "            subsample=8,\n",
-    "            uniform_3d_rotation=True,\n",
-    "        ),\n",
-    "        IntensityAugmentConfig(scale=(0.5, 1.5), shift=(-0.2, 0.2), clip=True),\n",
-    "        GammaAugmentConfig(gamma_range=(0.5, 1.5)),\n",
-    "        IntensityScaleShiftAugmentConfig(scale=2.0, shift=-1.0),\n",
-    "    ],\n",
-    "    snapshot_interval=10000,\n",
-    "    min_masked=0.05,\n",
-    "    clip_raw=False,\n",
+    "from dacapo.experiments.datasplits.datasets.arrays import (\n",
+    "    BinarizeArrayConfig,\n",
+    "    IntensitiesArrayConfig,\n",
+    "    MissingAnnotationsMaskConfig,\n",
+    "    ResampledArrayConfig,\n",
+    "    ZarrArrayConfig,\n",
     ")\n",
-    "config_store.store_trainer_config(trainer_config)"
-   ]
-  },
-  {
-   "attachments": {},
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Datasplit"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 22,
-   "metadata": {},
-   "outputs": [],
-   "source": [
+    "from dacapo.experiments.datasplits import TrainValidateDataSplitConfig\n",
+    "from dacapo.experiments.datasplits.datasets import RawGTDatasetConfig\n",
+    "from pathlib import PosixPath\n",
+    "\n",
     "datasplit_config = TrainValidateDataSplitConfig(\n",
     "    name=\"example_jrc_mus-livers_peroxisome_8nm\",\n",
     "    train_configs=[\n",
@@ -353,30 +258,148 @@
     "config_store.store_datasplit_config(datasplit_config)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Task\n",
+    "What do you want to learn? An instance segmentation? If so, how? Affinities,\n",
+    "Distance Transform, Foreground/Background, etc. Each of these tasks is commonly learned\n",
+    "and evaluated with specific loss functions and evaluation metrics. Some tasks may\n",
+    "also require specific non-linearities or output formats from your model."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from dacapo.experiments.tasks import DistanceTaskConfig\n",
+    "\n",
+    "task_config = DistanceTaskConfig(\n",
+    "    name=\"example_distances_8nm_peroxisome\",\n",
+    "    channels=[\"peroxisome\"],\n",
+    "    clip_distance=80.0,\n",
+    "    tol_distance=80.0,\n",
+    "    scale_factor=160.0,\n",
+    "    mask_distances=True,\n",
+    ")\n",
+    "config_store.store_task_config(task_config)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Architecture\n",
+    "\n",
+    "The setup of the network you will train. Biomedical image-to-image translation often uses a UNet, but even after choosing a UNet you still need to provide some additional parameters. How much do you want to downsample? How many convolutional layers do you want?"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from dacapo.experiments.architectures import CNNectomeUNetConfig\n",
+    "\n",
+    "architecture_config = CNNectomeUNetConfig(\n",
+    "    name=\"example_attention-upsample-unet\",\n",
+    "    input_shape=(216, 216, 216),\n",
+    "    fmaps_out=72,\n",
+    "    fmaps_in=1,\n",
+    "    num_fmaps=12,\n",
+    "    fmap_inc_factor=6,\n",
+    "    downsample_factors=[(2, 2, 2), (3, 3, 3), (3, 3, 3)],\n",
+    "    kernel_size_down=None,\n",
+    "    kernel_size_up=None,\n",
+    "    eval_shape_increase=(72, 72, 72),\n",
+    "    upsample_factors=[(2, 2, 2)],\n",
+    "    constant_upsample=True,\n",
+    "    padding=\"valid\",\n",
+    ")\n",
+    "config_store.store_architecture_config(architecture_config)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Trainer\n",
+    "\n",
+    "How do you want to train? This config defines the training loop and how the other three components work together: what augmentations to apply during training, what learning rate and optimizer to use, and what batch size to train with."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from dacapo.experiments.trainers import GunpowderTrainerConfig\n",
+    "from dacapo.experiments.trainers.gp_augments import (\n",
+    "    ElasticAugmentConfig,\n",
+    "    GammaAugmentConfig,\n",
+    "    IntensityAugmentConfig,\n",
+    "    IntensityScaleShiftAugmentConfig,\n",
+    ")\n",
+    "\n",
+    "trainer_config = GunpowderTrainerConfig(\n",
+    "    name=\"example_default_one_label_finetuning\",\n",
+    "    batch_size=2,\n",
+    "    learning_rate=1e-05,\n",
+    "    num_data_fetchers=20,\n",
+    "    augments=[\n",
+    "        ElasticAugmentConfig(\n",
+    "            control_point_spacing=[100, 100, 100],\n",
+    "            control_point_displacement_sigma=[10.0, 10.0, 10.0],\n",
+    "            rotation_interval=(0.0, 1.5707963267948966),\n",
+    "            subsample=8,\n",
+    "            uniform_3d_rotation=True,\n",
+    "        ),\n",
+    "        IntensityAugmentConfig(scale=(0.5, 1.5), shift=(-0.2, 0.2), clip=True),\n",
+    "        GammaAugmentConfig(gamma_range=(0.5, 1.5)),\n",
+    "        IntensityScaleShiftAugmentConfig(scale=2.0, shift=-1.0),\n",
+    "    ],\n",
+    "    snapshot_interval=10000,\n",
+    "    min_masked=0.05,\n",
+    "    clip_raw=False,\n",
+    ")\n",
+    "config_store.store_trainer_config(trainer_config)"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Run"
+    "## Run\n",
+    "Now that we have our components configured, we just need to combine them into a run and start training. We can train multiple repetitions of a single set of configs to increase our chances of finding an optimum."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 23,
+   "execution_count": 11,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "example_finetuned_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning_example_jrc_mus-livers_peroxisome_8nm__0\n",
-      "example_finetuned_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning_example_jrc_mus-livers_peroxisome_8nm__1\n",
-      "example_finetuned_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning_example_jrc_mus-livers_peroxisome_8nm__2\n"
+      "example_finetuned_example_jrc_mus-livers_peroxisome_8nm_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning__0\n",
+      "example_finetuned_example_jrc_mus-livers_peroxisome_8nm_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning__1\n",
+      "example_finetuned_example_jrc_mus-livers_peroxisome_8nm_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning__2\n"
      ]
     }
    ],
    "source": [
+    "from dacapo.experiments.starts import StartConfig\n",
+    "from dacapo.experiments import RunConfig\n",
+    "from dacapo.experiments.run import Run\n",
+    "\n",
     "start_config = StartConfig(\n",
     "    \"setup04\",\n",
     "    \"best\",\n",
@@ -390,17 +413,17 @@
     "            [\n",
     "                \"example\",\n",
     "                \"scratch\" if start_config is None else \"finetuned\",\n",
+    "                datasplit_config.name,\n",
     "                task_config.name,\n",
     "                architecture_config.name,\n",
     "                trainer_config.name,\n",
-    "                datasplit_config.name,\n",
     "            ]\n",
     "        )\n",
     "        + f\"__{i}\",\n",
+    "        datasplit_config=datasplit_config,\n",
     "        task_config=task_config,\n",
     "        architecture_config=architecture_config,\n",
     "        trainer_config=trainer_config,\n",
-    "        datasplit_config=datasplit_config,\n",
     "        num_iterations=iterations,\n",
     "        validation_interval=validation_interval,\n",
     "        repetition=i,\n",
@@ -410,6 +433,55 @@
     "    print(run_config.name)\n",
     "    config_store.store_run_config(run_config)"
    ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Train"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "To train one of the runs, you can either create a **Run** directly from the run config:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from dacapo.train import train_run\n",
+    "\n",
+    "run = Run(run_config)\n",
+    "train_run(run)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Or, since we already stored the configs, we can start the run with just the run name:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "train_run(run_config.name)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you want to start your run on some compute cluster, you might want to use the command line interface: `dacapo train -r {run_config.name}`. This makes it particularly convenient to run on compute nodes where you can specify the compute requirements you need."
+   ]
+   ]
+  }
 ],
 "metadata": {
diff --git a/examples/distance_task/finetune_liver_peroxisome.md b/examples/distance_task/finetune_liver_peroxisome.md
index a20ee58c6..3c2f14e68 100644
--- a/examples/distance_task/finetune_liver_peroxisome.md
+++ b/examples/distance_task/finetune_liver_peroxisome.md
@@ -1,109 +1,51 @@
 # Dacapo
 
-## Imports
+DaCapo is a framework that allows for easy configuration and execution of established machine learning techniques on arbitrarily large volumes of multi-dimensional images.
 
+DaCapo has 4 major configurable components:
+1. **dacapo.datasplits.DataSplit**
 
-```python
-from pathlib import PosixPath
-from dacapo.experiments.datasplits.datasets.arrays import (
-    BinarizeArrayConfig,
-    IntensitiesArrayConfig,
-    MissingAnnotationsMaskConfig,
-    ResampledArrayConfig,
-    ZarrArrayConfig,
-)
-from dacapo.experiments.tasks import DistanceTaskConfig
-from dacapo.experiments.architectures import CNNectomeUNetConfig
-from dacapo.experiments.trainers import GunpowderTrainerConfig
-from dacapo.experiments.trainers.gp_augments import (
-    ElasticAugmentConfig,
-    GammaAugmentConfig,
-    IntensityAugmentConfig,
-    IntensityScaleShiftAugmentConfig,
-)
-from dacapo.experiments.datasplits import TrainValidateDataSplitConfig
-from dacapo.experiments.datasplits.datasets import RawGTDatasetConfig
-from dacapo.experiments.starts import StartConfig
-from dacapo.experiments import RunConfig
-from dacapo.store.create_store import create_config_store
-```
-
-## Config Store
+2. **dacapo.tasks.Task**
 
+3. **dacapo.architectures.Architecture**
 
-```python
-config_store = create_config_store()
-```
+4. **dacapo.trainers.Trainer**
 
-## Task
+These are then combined in a single **dacapo.experiments.Run** that includes your starting point (whether you want to start training from scratch or continue from a previously trained model) and stopping criterion (the number of iterations you want to train).
 
+## Config Store
 
-```python
-task_config = DistanceTaskConfig(
-    name="example_distances_8nm_peroxisome",
-    channels=["peroxisome"],
-    clip_distance=80.0,
-    tol_distance=80.0,
-    scale_factor=160.0,
-    mask_distances=True,
-)
-config_store.store_task_config(task_config)
+To define where the data goes, create a dacapo.yaml configuration file. Here is a template:
+```yaml
+mongodbhost: mongodb://dbuser:dbpass@dburl:dbport/
+mongodbname: dacapo
+runs_base_dir: /path/to/my/data/storage
 ```
-
-## Architecture
+The `runs_base_dir` defines where your on-disk data will be stored. The `mongodbhost` and `mongodbname` define the MongoDB host and database that will store your cloud data. If you want to store everything on disk, replace `mongodbhost` and `mongodbname` with the single entry `type: files` and everything will be saved to disk.
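+
+For example, a purely on-disk dacapo.yaml would then reduce to something like this (a minimal sketch inferred from the sentence above; double-check the exact key name against your DaCapo version):
+
+```yaml
+type: files
+runs_base_dir: /path/to/my/data/storage
+```
+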
 ```python
-architecture_config = CNNectomeUNetConfig(
-    name="example_attention-upsample-unet",
-    input_shape=(216, 216, 216),
-    fmaps_out=72,
-    fmaps_in=1,
-    num_fmaps=12,
-    fmap_inc_factor=6,
-    downsample_factors=[(2, 2, 2), (3, 3, 3), (3, 3, 3)],
-    kernel_size_down=None,
-    kernel_size_up=None,
-    eval_shape_increase=(72, 72, 72),
-    upsample_factors=[(2, 2, 2)],
-    constant_upsample=True,
-    padding="valid",
-)
-config_store.store_architecture_config(architecture_config)
-```
-
-## Trainer
-
+from dacapo.store.create_store import create_config_store
-```python
-trainer_config = GunpowderTrainerConfig(
-    name="example_default_one_label_finetuning",
-    batch_size=2,
-    learning_rate=1e-05,
-    num_data_fetchers=20,
-    augments=[
-        ElasticAugmentConfig(
-            control_point_spacing=[100, 100, 100],
-            control_point_displacement_sigma=[10.0, 10.0, 10.0],
-            rotation_interval=(0.0, 1.5707963267948966),
-            subsample=8,
-            uniform_3d_rotation=True,
-        ),
-        IntensityAugmentConfig(scale=(0.5, 1.5), shift=(-0.2, 0.2), clip=True),
-        GammaAugmentConfig(gamma_range=(0.5, 1.5)),
-        IntensityScaleShiftAugmentConfig(scale=2.0, shift=-1.0),
-    ],
-    snapshot_interval=10000,
-    min_masked=0.05,
-    clip_raw=False,
-)
-config_store.store_trainer_config(trainer_config)
+config_store = create_config_store()
 ```
 
 ## Datasplit
+Where can you find your data? What format is it in? Does it need to be normalized? What data do you want to use for validation?
 
 
 ```python
+from dacapo.experiments.datasplits.datasets.arrays import (
+    BinarizeArrayConfig,
+    IntensitiesArrayConfig,
+    MissingAnnotationsMaskConfig,
+    ResampledArrayConfig,
+    ZarrArrayConfig,
+)
+from dacapo.experiments.datasplits import TrainValidateDataSplitConfig
+from dacapo.experiments.datasplits.datasets import RawGTDatasetConfig
+from pathlib import PosixPath
+
 datasplit_config = TrainValidateDataSplitConfig(
     name="example_jrc_mus-livers_peroxisome_8nm",
     train_configs=[
@@ -289,10 +231,100 @@
 config_store.store_datasplit_config(datasplit_config)
 ```
 
+## Task
+What do you want to learn? An instance segmentation? If so, how? Affinities,
+Distance Transform, Foreground/Background, etc. Each of these tasks is commonly learned
+and evaluated with specific loss functions and evaluation metrics. Some tasks may
+also require specific non-linearities or output formats from your model.
+
+
+```python
+from dacapo.experiments.tasks import DistanceTaskConfig
+
+task_config = DistanceTaskConfig(
+    name="example_distances_8nm_peroxisome",
+    channels=["peroxisome"],
+    clip_distance=80.0,
+    tol_distance=80.0,
+    scale_factor=160.0,
+    mask_distances=True,
+)
+config_store.store_task_config(task_config)
+```
+
+## Architecture
+
+The setup of the network you will train. Biomedical image-to-image translation often uses a UNet, but even after choosing a UNet you still need to provide some additional parameters. How much do you want to downsample? How many convolutional layers do you want?
+
+
+```python
+from dacapo.experiments.architectures import CNNectomeUNetConfig
+
+architecture_config = CNNectomeUNetConfig(
+    name="example_attention-upsample-unet",
+    input_shape=(216, 216, 216),
+    fmaps_out=72,
+    fmaps_in=1,
+    num_fmaps=12,
+    fmap_inc_factor=6,
+    downsample_factors=[(2, 2, 2), (3, 3, 3), (3, 3, 3)],
+    kernel_size_down=None,
+    kernel_size_up=None,
+    eval_shape_increase=(72, 72, 72),
+    upsample_factors=[(2, 2, 2)],
+    constant_upsample=True,
+    padding="valid",
+)
+config_store.store_architecture_config(architecture_config)
+```
+
+## Trainer
+
+How do you want to train? This config defines the training loop and how the other three components work together: what augmentations to apply during training, what learning rate and optimizer to use, and what batch size to train with.
+
+
+```python
+from dacapo.experiments.trainers import GunpowderTrainerConfig
+from dacapo.experiments.trainers.gp_augments import (
+    ElasticAugmentConfig,
+    GammaAugmentConfig,
+    IntensityAugmentConfig,
+    IntensityScaleShiftAugmentConfig,
+)
+
+trainer_config = GunpowderTrainerConfig(
+    name="example_default_one_label_finetuning",
+    batch_size=2,
+    learning_rate=1e-05,
+    num_data_fetchers=20,
+    augments=[
+        ElasticAugmentConfig(
+            control_point_spacing=[100, 100, 100],
+            control_point_displacement_sigma=[10.0, 10.0, 10.0],
+            rotation_interval=(0.0, 1.5707963267948966),
+            subsample=8,
+            uniform_3d_rotation=True,
+        ),
+        IntensityAugmentConfig(scale=(0.5, 1.5), shift=(-0.2, 0.2), clip=True),
+        GammaAugmentConfig(gamma_range=(0.5, 1.5)),
+        IntensityScaleShiftAugmentConfig(scale=2.0, shift=-1.0),
+    ],
+    snapshot_interval=10000,
+    min_masked=0.05,
+    clip_raw=False,
+)
+config_store.store_trainer_config(trainer_config)
+```
+
 ## Run
+Now that we have our components configured, we just need to combine them into a run and start training. We can train multiple repetitions of a single set of configs to increase our chances of finding an optimum.
 ```python
+from dacapo.experiments.starts import StartConfig
+from dacapo.experiments import RunConfig
+from dacapo.experiments.run import Run
+
 start_config = StartConfig(
     "setup04",
     "best",
@@ -306,17 +338,17 @@
             [
                 "example",
                 "scratch" if start_config is None else "finetuned",
+                datasplit_config.name,
                 task_config.name,
                 architecture_config.name,
                 trainer_config.name,
-                datasplit_config.name,
             ]
         )
         + f"__{i}",
+        datasplit_config=datasplit_config,
         task_config=task_config,
         architecture_config=architecture_config,
         trainer_config=trainer_config,
-        datasplit_config=datasplit_config,
         num_iterations=iterations,
         validation_interval=validation_interval,
         repetition=i,
@@ -327,7 +359,28 @@ for i in range(repetitions):
     config_store.store_run_config(run_config)
 ```
 
-    example_finetuned_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning_example_jrc_mus-livers_peroxisome_8nm__0
-    example_finetuned_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning_example_jrc_mus-livers_peroxisome_8nm__1
-    example_finetuned_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning_example_jrc_mus-livers_peroxisome_8nm__2
+    example_finetuned_example_jrc_mus-livers_peroxisome_8nm_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning__0
+    example_finetuned_example_jrc_mus-livers_peroxisome_8nm_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning__1
+    example_finetuned_example_jrc_mus-livers_peroxisome_8nm_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning__2
+
+
+## Train
+
+To train one of the runs, you can either create a **Run** directly from the run config:
+
+
+```python
+from dacapo.train import train_run
+
+run = Run(run_config)
+train_run(run)
+```
+
+Or, since we already stored the configs, we can start the run with just the run name:
+
+
+```python
+train_run(run_config.name)
+```
+
+If you want to start your run on some compute cluster, you might want to use the command line interface: `dacapo train -r {run_config.name}`. This makes it particularly convenient to run on compute nodes where you can specify the compute requirements you need.
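+
+For example, to launch the first repetition configured above from a shell (a sketch assuming the `dacapo` CLI is installed and can reach the same config store as this notebook):
+
+```bash
+dacapo train -r example_finetuned_example_jrc_mus-livers_peroxisome_8nm_example_distances_8nm_peroxisome_example_attention-upsample-unet_example_default_one_label_finetuning__0
+```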