From f17ba03eab6068eeeea851d6f17cdf3feafa85d5 Mon Sep 17 00:00:00 2001 From: Ben Salmon Date: Mon, 19 Aug 2024 11:32:22 +0100 Subject: [PATCH] checkpoint conclusions --- 01_CARE/care_solution.ipynb | 50 +++++++++++++++++++++++++++++++++++-- 1 file changed, 48 insertions(+), 2 deletions(-) diff --git a/01_CARE/care_solution.ipynb b/01_CARE/care_solution.ipynb index 6ce6809..8f7ab0c 100755 --- a/01_CARE/care_solution.ipynb +++ b/01_CARE/care_solution.ipynb @@ -514,9 +514,26 @@ "metadata": {}, "source": [ "

Checkpoint 1: Data

&#13;\n",
+ "\n",
+ "In this section, we prepared paired training data.\n",
+ "The steps were:\n",
+ "1) Loading the images.\n",
+ "2) Cropping them into patches.\n",
+ "3) Checking the patches visually.\n",
+ "4) Creating an instance of a PyTorch `Dataset` and `DataLoader`.\n",
+ "\n",
+ "You'll see a similar preparation procedure followed for most deep learning vision tasks.\n",
+ "\n",
+ "Next, we'll use this data to train a denoising model.\n",
"
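The preparation steps above can be sketched roughly as follows. This is a minimal illustration, not the notebook's actual code: the images are random arrays, and names like `crop_patches` and `PairedPatchDataset` are assumptions introduced here.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

# Step 1 stand-in: random arrays in place of the loaded noisy/clean images.
noisy_image = np.random.rand(256, 256).astype(np.float32)
clean_image = np.random.rand(256, 256).astype(np.float32)

# Step 2: crop a 2D image into non-overlapping square patches.
def crop_patches(image, patch_size=64):
    h, w = image.shape
    return [
        image[y : y + patch_size, x : x + patch_size]
        for y in range(0, h - patch_size + 1, patch_size)
        for x in range(0, w - patch_size + 1, patch_size)
    ]

# Step 4: a paired dataset returning (noisy, clean) patches as 1xHxW tensors.
class PairedPatchDataset(Dataset):
    def __init__(self, noisy_img, clean_img, patch_size=64):
        self.noisy = crop_patches(noisy_img, patch_size)
        self.clean = crop_patches(clean_img, patch_size)

    def __len__(self):
        return len(self.noisy)

    def __getitem__(self, idx):
        # Add a leading channel dimension so patches are 1 x H x W.
        noisy = torch.from_numpy(self.noisy[idx]).unsqueeze(0)
        clean = torch.from_numpy(self.clean[idx]).unsqueeze(0)
        return noisy, clean

dataset = PairedPatchDataset(noisy_image, clean_image)
loader = DataLoader(dataset, batch_size=4, shuffle=True)
```

Keeping the noisy and clean patches index-aligned is the key design point: each batch from the loader yields matched pairs ready for supervised training.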
\n", "\n", - "
\n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "\n", "## Part 2: Training the model\n", "\n", @@ -773,9 +790,25 @@ "metadata": {}, "source": [ "

Checkpoint 2: Training

&#13;\n",
+ "\n",
+ "In this section, we created and trained a UNet for denoising.\n",
+ "We:\n",
+ "1) Instantiated the model with random weights.\n",
+ "2) Chose a loss function to compare the output image to the ground-truth clean image.\n",
+ "3) Chose an optimizer to minimize that loss function.\n",
+ "4) Trained the model with this optimizer.\n",
+ "5) Examined the training and validation loss curves to see how well our model trained.\n",
+ "\n",
+ "Next, we'll load a test set of noisy images and see how well our model denoises them.\n",
"
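The five steps above can be sketched as a compact training loop. This is only an illustration: a tiny conv net stands in for the notebook's UNet, and random tensors stand in for the data, so that the sketch is self-contained.

```python
import torch
from torch import nn

# Step 1 stand-in: a small conv net instantiated with random weights
# (the notebook uses a UNet; this keeps the sketch self-contained).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)

# Step 2: a loss comparing the model output to the clean target.
loss_fn = nn.MSELoss()

# Step 3: an optimizer that will minimize that loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Random stand-ins for a batch of (noisy, clean) training pairs.
noisy = torch.rand(4, 1, 32, 32)
clean = torch.rand(4, 1, 32, 32)

# Step 4: the training loop itself.
model.train()
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)  # forward pass + loss
    loss.backward()                      # backpropagate gradients
    optimizer.step()                     # update the weights
    # Step 5: in the notebook, these values feed the loss curves.
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

In the real notebook the loop iterates over the `DataLoader` batches and also tracks a validation loss, which is what makes the train/validation curves comparable.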
\n", "\n", - "
\n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "\n", "## Part 3: Predicting on the test dataset\n" ] @@ -923,8 +956,21 @@ "metadata": {}, "source": [ "

Checkpoint 3: Predicting

&#13;\n",
+ "\n",
+ "In this section, we evaluated the performance of our denoiser.\n",
+ "We:\n",
+ "1) Created a `CAREDataset` and `DataLoader` for a prediction loop.\n",
+ "2) Ran a prediction loop on the test data.\n",
+ "3) Examined the outputs.\n",
+ "\n",
+ "This notebook has shown how matched pairs of noisy and clean images can be used to train a UNet to denoise, but what if we don't have any clean images? In the next notebook, we'll try Noise2Void, a method for training a UNet to denoise using only noisy images.\n",
"
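A minimal sketch of the prediction loop above. The model and test batches here are random stand-ins; the notebook's `CAREDataset` and trained UNet are assumed, not shown.

```python
import torch
from torch import nn

# Stand-in for the trained denoiser (the notebook uses its trained UNet).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)

# Stand-in for the test DataLoader: a few batches of random "noisy" images.
test_loader = [torch.rand(2, 1, 64, 64) for _ in range(3)]

model.eval()                 # switch off training-only behaviour
predictions = []
with torch.no_grad():        # no gradients needed at inference time
    for batch in test_loader:
        predictions.append(model(batch))
```

Wrapping the loop in `torch.no_grad()` and calling `model.eval()` are the two details that distinguish a prediction loop from a training loop: no loss, no backward pass, no weight updates.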
" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] } ], "metadata": {