presentation formatting
bellonet committed Sep 23, 2024
1 parent 7aba51e commit b7b86a9
Showing 11 changed files with 162 additions and 183 deletions.
103 changes: 39 additions & 64 deletions content/_index.md
@@ -17,7 +17,7 @@ In this workshop, we will explore fundamental concepts and practical techniques

<h4>Spatial alignment of two or more images.</h4>

- In science, it's an essential step for comparing or integrating data
- It's an essential step for comparing or integrating data in many scientific fields.


{{< notes >}}
@@ -89,8 +89,7 @@ Introducing registration methods that combine both matching and transformation i
![](img/correlation_r0_s1.png)

{{< notes >}}
[https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/correlation_example.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/correlation_example.ipynb)
[Examples were generated using: example_notebooks/correlation_example.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/correlation_example.ipynb)
{{</ notes >}}
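A minimal sketch of the similarity measure behind these figures, assuming plain NumPy and two equally sized grayscale patches (the function name is illustrative; this is not the notebook's actual code):

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Pearson-style NCC between two equally sized grayscale patches.

    Returns a value in [-1, 1]; 1 means a perfect linear match.
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:  # constant patch: correlation is undefined, return 0
        return 0.0
    return float((a * b).sum() / denom)

# Identical patches correlate perfectly; an inverted patch anti-correlates.
patch = np.array([[0, 1], [2, 3]])
print(normalized_cross_correlation(patch, patch))      # 1.0
print(normalized_cross_correlation(patch, 3 - patch))  # -1.0
```

Sliding this score (or plain cross-correlation) over all candidate offsets gives the correlation surfaces shown above.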

@@ -113,8 +112,7 @@ Entropy is maximized when there is maximum uncertainty or randomness in the pixe
Meaning that an image with a single pixel intensity value will have minimum entropy, and an image with a uniform distribution of pixel intensities will have maximum entropy.
{{</ notes >}}

- [https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/mutual_information.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/mutual_information.ipynb)
- [Mutual Information implementation notebook: example_notebooks/mutual_information.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/mutual_information.ipynb)
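A hedged sketch of mutual information computed from the joint intensity histogram, assuming NumPy only (function name and bin count are illustrative choices, not taken from the notebook):

```python
import numpy as np

def mutual_information(img1, img2, bins=16):
    """Mutual information between two equally sized images, in bits.

    Built from the joint histogram of pixel intensities:
    MI = H(img1) + H(img2) - H(img1, img2).
    """
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()    # joint probability of intensity pairs
    px = pxy.sum(axis=1)         # marginal of img1
    py = pxy.sum(axis=0)         # marginal of img2
    nz = pxy > 0                 # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# An image shares far more information with itself than with unrelated noise.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64))
b = rng.integers(0, 256, (64, 64))
print(mutual_information(a, a) > mutual_information(a, b))  # True
```

Because MI only asks whether intensities co-vary statistically, not whether they are equal, it is a popular similarity measure for multimodal registration.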

---
@@ -147,24 +145,17 @@ An example of applying a feature-based registration pipeline to align two images

{{< horizontal >}}

1. **Detecting Similarities**: identifying corresponding features
- **Feature Detection**
- Detect keypoints and their descriptors (e.g. using SIFT)
- **Feature Matching**
- Match features between images
1. **Detecting Similarities**:
- **Feature Detection:** Detect keypoints and their descriptors (e.g. using SIFT)
- **Feature Matching:** Match features between images to obtain corresponding keypoint pairs.
2. **Estimating and Applying Transformations**: one image is transformed in space to match the other
- **Transformation Estimation**
- Compute transformation matrix (e.g. affine) using matched keypoints
- **Warping**
- Apply transformation to align images
- **Transformation Estimation:** Compute transformation matrix (e.g. affine) using matched keypoints.
- **Warping:** Apply transformation to align images.

<img src="img/sift_route.png" alt="sift keypoints and matches"/>

{{</ horizontal >}}

- [https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/sift_example.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/sift_example.ipynb)
- [SIFT based registration notebook: example_notebooks/sift_example.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/sift_example.ipynb)
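The transformation-estimation step of this pipeline can be sketched with plain NumPy once matched keypoint coordinates are available (keypoint detection and matching themselves would come from a library such as OpenCV's SIFT; the function below is an illustrative sketch, not the notebook's code):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 3.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                   # (N, 3) homogeneous source
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ A ~= dst
    return A.T                                   # (2, 3)

# Synthetic check: recover a known rotation + translation from 4 matches.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
dst = src @ R.T + np.array([2.0, 3.0])
A = estimate_affine(src, dst)
print(np.allclose(A[:, :2], R) and np.allclose(A[:, 2], [2, 3]))  # True
```

With real SIFT matches an outlier-robust estimator (e.g. RANSAC around this least-squares fit) is usually preferred, since some matches will be wrong.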

{{< notes >}}
SIFT can be robust and thus can be used for multimodal registration.
@@ -185,12 +176,11 @@ Deep learning based approaches can perform well on those tasks with minimal trai
1. **Predefined Feature Detection**: e.g. pose estimation.
- **Manual selection of features**
- **Annotation of training data**
- **Model selection, training, and prediction**
2. **Estimating and Applying Transformations**: All images are transformed in space to match a template image.
- **Transformation Estimation**
- Compute transformation matrix (e.g. affine) using matched keypoints
- **Warping**
- Apply transformation to align images
- **Deep learning landmark detection**
- Model selection
- Training
- Prediction of landmark locations on all images of the dataset
2. **Estimating and Applying Transformations** using the detected landmarks.

<img src="img/wing_landmarks.png" alt="wing landmarks model"/><img src="img/wing_registration.png" alt="wing landmarks model"/>
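Once landmarks have been predicted for every image, step 2 reduces to a point-set alignment problem. A hedged NumPy sketch of the classic Kabsch/Procrustes rigid fit (function name and test data are illustrative, not from the workshop materials):

```python
import numpy as np

def rigid_align(src, dst):
    """Best rigid transform (rotation R, translation t) mapping src -> dst.

    src, dst: (N, 2) matched landmark coordinates. Kabsch algorithm:
    SVD of the cross-covariance of the two centered point sets.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic landmarks: a 45-degree rotation plus a translation is recovered.
theta = np.deg2rad(45)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = np.array([[0, 0], [2, 0], [1, 3]], dtype=float)
dst = src @ R_true.T + np.array([5.0, -1.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true) and np.allclose(t, [5, -1]))  # True
```

Aligning every image's landmarks to the template's landmarks this way, then warping, completes the registration.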

@@ -206,6 +196,9 @@ DeepLabCut is a tracking tool that is open-source and offers great models that ca

![](img/transformations.png)

[Link to: example_notebooks/transformation_examples.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/transformation_examples.ipynb)

{{< notes >}}

The type of transformation should be chosen based on the expected deformations in the images.
@@ -218,43 +211,21 @@ Rigid transformation requires 2 points, affine 3 points, perspective 4 points, i
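As an illustration of why a perspective transform needs 4 point pairs (8 degrees of freedom, 2 equations per pair), here is a hedged NumPy sketch of direct linear transform (DLT) homography estimation; the function name and test points are illustrative, not from the workshop notebooks:

```python
import numpy as np

def homography_from_points(src, dst):
    """Perspective transform (3x3 homography) from exactly 4 point pairs (DLT).

    Rigid needs 2 pairs, affine 3, perspective 4: each pair contributes
    two linear equations, and a homography has 8 degrees of freedom.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)  # null-space vector = homography entries
    return H / H[2, 2]        # normalize so H[2,2] == 1

# Mapping the unit square to a square twice the size recovers pure scaling.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0), (2, 2), (0, 2)]
H = homography_from_points(src, dst)
print(np.allclose(H, np.diag([2, 2, 1])))  # True
```

Rigid and affine fits are the same idea with fewer unknowns, which is why they get away with fewer point pairs.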

---

## Image Transformation Generation

![](img/transformation_nb.png)

- [https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/transformation_examples.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/transformation_examples.ipynb)

---

## Image Interpolation - Common Types

{{< notes >}}
When you transform an image to a new space, you need to estimate the pixel values at the new locations.
Interpolation is used to estimate pixel values at non-integer coordinates.
{{</ notes >}}

![](img/interpolation_functions.png)
Image by [Cmglee](https://commons.wikimedia.org/wiki/User:Cmglee), license: CC BY-SA 4.0
{{< horizontal >}}
![](img/interpolation_functions.png)

![](img/interpolation_weights.png)
{{</ horizontal >}}

---

## Image Interpolation Example

![](img/interpolation_rotation.png)
![](img/interpolation_shearing.png)

---

## Image Interpolation Notebook

![](img/interpolation_nb.png)

- [https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/interpolation.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/interpolation.ipynb)
Image by [Cmglee](https://commons.wikimedia.org/wiki/User:Cmglee), license: CC BY-SA 4.0
[Link to interpolation weights and examples notebook: example_notebooks/interpolation.ipynb](https://github.com/bellonet/image-registration-workshop/blob/main/example_notebooks/interpolation.ipynb)
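A hedged single-pixel sketch of the idea (the linked notebook covers full images and other kernels): bilinear interpolation blends the four integer-grid neighbours of a non-integer coordinate.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinearly interpolated value of grayscale `img` at non-integer (x, y).

    x is the column coordinate, y the row; assumes the point lies strictly
    inside the image so that all four neighbours exist.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]  # blend along the row above
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]  # blend along the row below
    return (1 - fy) * top + fy * bot                 # blend between the rows

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear_sample(img, 0.5, 0.5))  # 15.0 (average of the four corners)
```

Nearest-neighbour just snaps to the closest grid point, while bicubic uses a 4x4 neighbourhood; the choice trades smoothness against sharpness and speed.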

{{< notes >}}
@@ -265,19 +236,24 @@

---

## Image Interpolation Example

![](img/interpolation_rotation.png)
![](img/interpolation_shearing.png)

---

## Challenges & Considerations

{{< horizontal >}}

- **Method Selection**:
- Match image type (e.g., multimodal) to appropriate method
- **Transformation Type**:
- Fit transformation to deformation (e.g., rigid vs. non-rigid)
- **Preprocessing**:
- Denoising, intensity correction, rescaling, applying filters
- In hard cases: use extrinsic information (e.g., physical landmarks)
- **Performance vs. Complexity**:
- Trade-off between accuracy and speed
- **Transformation Type**:
- Fit transformation to deformation (e.g., rigid vs. non-rigid)
- **Method Selection**:
- Match image type (e.g., multimodal) to appropriate method

![](img/edge_detection.png)

@@ -289,7 +265,7 @@ Image by [Cmglee](https://commons.wikimedia.org/wiki/User:Cmglee), license: CC BY-SA 4.0

{{< horizontal >}}

## Image Registration Guidelines
## Image Registration Guideline

![Image Registration Guideline](img/flowchart.png)

@@ -331,15 +307,14 @@ We will work through practical examples using Fiji/ImageJ and Python.

## Thank You!

#### Thanks for participating. Please feel free to reach out with any questions.
##### Thanks for participating. Please feel free to reach out with any questions.

![](img/people/hi-support-staff.jpg)
![](img/people/hi-support-staff.png)

Resources I used to create this presentation:
- Deborah Schmidt - [https://ida-mdc.gitlab.io/workshops/3d-data-visualization/](https://ida-mdc.gitlab.io/workshops/3d-data-visualization/)
- Erik Meijering - [https://www.youtube.com/watch?v=ecu8kreTwYM](https://www.youtube.com/watch?v=ecu8kreTwYM)
Contact: **ella.bahry at mdc-berlin.de**

Presentation template: Deborah Schmidt - [https://ida-mdc.gitlab.io/workshops/3d-data-visualization/](https://ida-mdc.gitlab.io/workshops/3d-data-visualization/)


Contact: **ella.bahry at mdc-berlin.de**


21 changes: 13 additions & 8 deletions static/img/flowcahrt.txt
@@ -1,24 +1,29 @@
digraph ImageRegistrationGuideline {
rankdir=TB;
node [shape=rectangle, style=rounded, fontsize=10, fontname="Helvetica"];
nodesep=0.1;
ranksep=0.1;
node [shape=rectangle, style=rounded, fontsize=12, fontname="Helvetica"];

size="9,9.7"; // Width=9 inches, Height=9.7 inches
ratio=compress; // Adjust the ratio to fit the height

// Start Node
start [label="Start", shape=ellipse, style=filled, fillcolor=lightgreen];

// Preprocessing Decision
preprocessing_check [label="Is preprocessing needed?", shape=diamond, style=filled, fillcolor=lightblue];
preprocessing_check [label="Is preprocessing needed?", shape=square, style=filled, fillcolor=lightblue];

// Preprocessing Steps
preprocessing_steps [label="Correct Intensity, Resize, or Segment.."];

// Deep Learning Decision
deep_learning_check [label="Do you have the necessary expertise,\ncomputational resources,\nand perhaps training data for Deep Learning?", shape=diamond, style=filled, fillcolor=lightblue];
deep_learning_check [label="Do you have the necessary expertise,\ncomputational resources,\nand perhaps training data for Deep Learning?", shape=square, style=filled, fillcolor=lightblue];

// Deep Learning Registration
deep_learning_registration [label="Apply Deep Learning-Based Registration\n(e.g., VoxelMorph, DeepReg)", shape=ellipse, style=filled, fillcolor=lightgrey];

// Step 1: Are the images very similar?
similarity_check [label="Are the images very similar?\n(Minimal transformation)", shape=diamond, style=filled, fillcolor=lightblue];
similarity_check [label="Are the images very similar?\n(Minimal transformation)", shape=square, style=filled, fillcolor=lightblue];

// Paths from similarity_check
intensity_based [label="Use Intensity-Based Registration\n(e.g., Cross-Correlation)"];
@@ -27,10 +32,10 @@ digraph ImageRegistrationGuideline {
retake_images [label="Retake Images with Extrinsic Landmarks\nor Improved Features", shape=ellipse, style=filled, fillcolor=lightgrey];

// Transformation Type Decision
transformation_type [label="Select Transformation Type", shape=diamond, style=filled, fillcolor=lightblue];
transformation_type [label="Select Transformation Type", shape=square, style=filled, fillcolor=lightblue];

// Local Deformations Decision
local_deformations [label="Are local deformations present?", shape=diamond, style=filled, fillcolor=lightblue];
local_deformations [label="Are local deformations present?", shape=square, style=filled, fillcolor=lightblue];

// Transformation Methods
rigid_transformation [label="Calculate and Apply Rigid, Affine, or Perspective Transformation"];
@@ -95,6 +100,6 @@ digraph ImageRegistrationGuideline {
retake_images -> final_step [style=invis]; // Invisible edge to maintain graph structure

// Styling for Clarity
edge [fontsize=9, fontname="Helvetica"];
node [fontsize=10, fontname="Helvetica"];
edge [fontsize=12, fontname="Helvetica"];
node [fontsize=12, fontname="Helvetica"];
}
Binary file modified static/img/flowchart.png
