diff --git a/7 SEMESTER/Image Processing/Assignment/Assignment 1/Assignment 1.docx b/7 SEMESTER/Image Processing/Assignment/Assignment 1/Assignment 1.docx
new file mode 100644
index 0000000..bdd2503
Binary files /dev/null and b/7 SEMESTER/Image Processing/Assignment/Assignment 1/Assignment 1.docx differ
diff --git a/7 SEMESTER/Image Processing/Assignment/Assignment 1/Assignment 1.pdf b/7 SEMESTER/Image Processing/Assignment/Assignment 1/Assignment 1.pdf
new file mode 100644
index 0000000..30ffc66
Binary files /dev/null and b/7 SEMESTER/Image Processing/Assignment/Assignment 1/Assignment 1.pdf differ
diff --git a/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Questions.txt b/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Questions.txt
new file mode 100644
index 0000000..202f8fd
--- /dev/null
+++ b/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Questions.txt
@@ -0,0 +1,7 @@
+Assignment 2
+
+Q1) Describe the various image restoration techniques?
+
+Q2) Explain spatial domain image enhancement techniques?
+
+Q3) Describe the filtering in the frequency domain?
\ No newline at end of file
diff --git a/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Solutions.md b/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Solutions.md
new file mode 100644
index 0000000..8ffcb86
--- /dev/null
+++ b/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Solutions.md
@@ -0,0 +1,442 @@
+
+
+## Author: Madhurima Rawat
+
+
+
+
+# Assignment 2
+
+
+
+
+## Question 1: Describe the various image restoration techniques?
+
+
+
+
+## Solution: Image Restoration Techniques
+
+
+### Table of Contents
+
+1. [Introduction to Image Restoration](#1-introduction-to-image-restoration)
+2. [Types of Degradation in Images](#2-types-of-degradation-in-images)
+3. [Key Image Restoration Techniques](#3-key-image-restoration-techniques)
+ - 3.1. [Inverse Filtering](#31-inverse-filtering)
+ - 3.2. [Wiener Filtering](#32-wiener-filtering)
+ - 3.3. [Constrained Least Squares Filtering](#33-constrained-least-squares-filtering)
+ - 3.4. [Iterative Restoration Techniques](#34-iterative-restoration-techniques)
+ - 3.5. [Blind Deconvolution](#35-blind-deconvolution)
+4. [Flowchart of Image Restoration Techniques](#4-flowchart-of-image-restoration-techniques)
+5. [Detailed Description of Techniques](#5-detailed-description-of-techniques)
+6. [Conclusion](#6-conclusion)
+
+---
+
+### 1. [Introduction to Image Restoration](#1-introduction-to-image-restoration)
+
+Image restoration refers to the process of recovering a degraded image to its original state. Unlike enhancement, which aims to improve image aesthetics, restoration focuses on removing known degradations like noise, blur, and motion artifacts. Restoration techniques rely on mathematical models of degradation and attempt to reverse these effects.
+
+---
+
+### 2. [Types of Degradation in Images](#2-types-of-degradation-in-images)
+
+Images can be degraded due to various reasons like noise, motion blur, atmospheric disturbance, or sensor defects. Common degradations include:
+
+- **Noise** (Gaussian, Salt-and-Pepper, etc.)
+- **Blur** (caused by camera movement, lens imperfections, or object motion)
+- **Low resolution** (due to poor quality sensors or sampling)
+
+---
+
+### 3. [Key Image Restoration Techniques](#3-key-image-restoration-techniques)
+
+#### 3.1. [Inverse Filtering](#31-inverse-filtering)
+
+Inverse filtering attempts to reverse the degradation process by applying the inverse of the degradation function to the degraded image. It is simple but highly sensitive to noise.
+
+#### 3.2. [Wiener Filtering](#32-wiener-filtering)
+
+Wiener filtering is an optimal approach for image restoration in the presence of noise. It minimizes the mean square error between the restored and the original image by taking both degradation and noise into account.
+
+#### 3.3. [Constrained Least Squares Filtering](#33-constrained-least-squares-filtering)
+
+This technique adds a constraint to the restoration process, ensuring smoothness in the restored image. It balances noise suppression and the preservation of image details.
+
+#### 3.4. [Iterative Restoration Techniques](#34-iterative-restoration-techniques)
+
+These techniques gradually improve the image over several iterations, refining the estimate of the original image. Methods such as the Richardson-Lucy algorithm fall under this category.
+
+#### 3.5. [Blind Deconvolution](#35-blind-deconvolution)
+
+Blind deconvolution is used when the degradation function is not known. It simultaneously estimates the original image and the degradation function, making it highly effective but computationally intensive.
+
+---
+
+### 4. [Flowchart of Image Restoration Techniques](#4-flowchart-of-image-restoration-techniques)
+
+```plaintext
+┌──────────────────────────────────────────┐
+│ Start │
+├──────────────────────────────────────────┤
+│ Read Degraded Image │
+├──────────────────────────────────────────┤
+│ Identify Degradation Model │
+├─────────────┬─────────────┬──────────────┤
+│ │ │ │
+▼ ▼ ▼ ▼
+Inverse Wiener Constrained Iterative/Blind
+Filtering Filtering Least Squares Deconvolution
+│ │ │ │
+▼ ▼ ▼ ▼
+Restored Noise-Reduced Smooth Image Enhanced with Iterations
+Image Image with Details or Blind Estimation
+├──────────────────────────────────────────┤
+│ Display Restored Image │
+├──────────────────────────────────────────┤
+│ End │
+└──────────────────────────────────────────┘
+```
+
+---
+
+### 5. [Detailed Description of Techniques](#5-detailed-description-of-techniques)
+
+- **Inverse Filtering**: This is the simplest restoration technique. The degraded image is restored by applying the inverse of the degradation function H(u, v). However, it is highly sensitive to noise, making it impractical in many cases. (A NumPy sketch of inverse and Wiener filtering follows this list.)
+
+ The relationship between the original image F(u, v) and the degraded image G(u, v) is:
+
+ G(u, v) = F(u, v) \* H(u, v)
+
+ To retrieve F(u, v), inverse filtering applies:
+
+ F(u, v) = G(u, v) / H(u, v)
+
+- **Wiener Filtering**: This technique accounts for both degradation and noise. It minimizes the mean square error between the restored and original images. The Wiener filter formula is:
+
+ H_Wiener(u, v) = H\*(u, v) / (|H(u, v)|^2 + (S_n(u, v) / S_f(u, v)))
+
+ Where:
+
+ - H\*(u, v) is the complex conjugate of the degradation function.
+ - |H(u, v)|^2 is the magnitude squared of the degradation function.
+ - S_n(u, v) is the power spectrum of the noise.
+ - S_f(u, v) is the power spectrum of the original image.
+
+- **Constrained Least Squares Filtering**: This technique involves minimizing a cost function that includes a regularization term. It ensures that the solution is not only close to the observed image but also smooth.
+
+ The cost function can be expressed as:
+
+  Minimize || H * F_restored - G ||^2 + λ || L * F_restored ||^2
+
+ Where:
+
+ - H is the degradation function.
+ - F_restored is the restored image.
+ - G is the degraded image.
+ - L is a regularization operator (often a derivative operator).
+ - λ is a regularization parameter balancing fidelity and smoothness.
+
+- **Iterative Restoration Techniques**: These methods restore the image gradually by refining the estimate at each iteration. The Richardson-Lucy algorithm is an example, commonly used in astronomical imaging.
+
+- **Blind Deconvolution**: Blind deconvolution doesn’t require prior knowledge of the degradation function. It estimates both the original image and the degradation function iteratively, making it a powerful but computationally demanding technique.
+
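+As referenced above, the following is a minimal NumPy sketch of inverse and Wiener filtering using FFT routines. It is an illustration, not a production implementation: the blur kernel `psf`, the constant noise-to-signal ratio `k`, and the box-blur example are all assumptions made for the sketch.
+
+```python
+import numpy as np
+
+def wiener_restore(degraded, psf, k=0.01):
+    """Minimal sketch: restore an image degraded by a known PSF.
+
+    k approximates the noise-to-signal ratio S_n / S_f as a constant;
+    k = 0 reduces the filter to plain inverse filtering (1 / H).
+    """
+    H = np.fft.fft2(psf, s=degraded.shape)   # degradation function H(u, v)
+    G = np.fft.fft2(degraded)                # degraded image G(u, v)
+    W = np.conj(H) / (np.abs(H) ** 2 + k)    # Wiener filter H* / (|H|^2 + k)
+    return np.real(np.fft.ifft2(W * G))      # estimate of F(u, v), back in space
+
+# Hypothetical degradation: a 5x5 box blur, assumed registered at the origin
+psf = np.ones((5, 5)) / 25.0
+```
+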
+---
+
+### 6. [Conclusion](#6-conclusion)
+
+Image restoration techniques aim to recover a degraded image based on mathematical models of the degradation. Each technique, from simple inverse filtering to complex blind deconvolution, has its specific applications and limitations. Understanding the type of degradation is key to selecting the most appropriate restoration method.
+
+
+
+## Question 2: Explain spatial domain image enhancement techniques?
+
+
+
+
+## Solution: Spatial Domain Image Enhancement
+
+
+### Table of Contents
+
+1. [Introduction to Spatial Domain Image Enhancement](#1-introduction-to-spatial-domain-image-enhancement)
+2. [Spatial Domain Techniques Overview](#2-spatial-domain-techniques-overview)
+3. [Key Techniques in Spatial Domain Image Enhancement](#3-key-techniques-in-spatial-domain-image-enhancement)
+ - 3.1. [Image Negatives](#31-image-negatives)
+ - 3.2. [Logarithmic Transformations](#32-logarithmic-transformations)
+ - 3.3. [Power-Law (Gamma) Transformations](#33-power-law-gamma-transformations)
+ - 3.4. [Histogram Equalization](#34-histogram-equalization)
+ - 3.5. [Spatial Filtering](#35-spatial-filtering)
+4. [Flowchart of Spatial Domain Techniques](#4-flowchart-of-spatial-domain-techniques)
+5. [Detailed Description of Techniques](#5-detailed-description-of-techniques)
+6. [Conclusion](#6-conclusion)
+
+---
+
+### 1. [Introduction to Spatial Domain Image Enhancement](#1-introduction-to-spatial-domain-image-enhancement)
+
+Spatial domain image enhancement refers to modifying the pixel values of an image directly. The enhancement aims to improve the visual quality of images by altering brightness, contrast, sharpness, etc. The spatial domain operates on the image’s individual pixels, manipulating them for different visual effects.
+
+---
+
+### 2. [Spatial Domain Techniques Overview](#2-spatial-domain-techniques-overview)
+
+The primary goal of spatial domain enhancement is to apply operations to pixels to achieve desirable visual results. It includes point processing, where pixel values are changed independently, and spatial filtering, which involves the neighboring pixel values.
+
+---
+
+### 3. [Key Techniques in Spatial Domain Image Enhancement](#3-key-techniques-in-spatial-domain-image-enhancement)
+
+#### 3.1. [Image Negatives](#31-image-negatives)
+
+This technique reverses the intensity levels of an image, producing a negative. It is mainly used in medical imaging to highlight subtle features.
+
+#### 3.2. [Logarithmic Transformations](#32-logarithmic-transformations)
+
+Log transformations are useful for enhancing images with large variations in pixel intensities. They expand the darker pixel values and compress brighter values.
+
+#### 3.3. [Power-Law (Gamma) Transformations](#33-power-law-gamma-transformations)
+
+This method adjusts the contrast of an image by applying a gamma correction. Varying the gamma value enhances different details of the image. A gamma value greater than 1 darkens the image, while a value less than 1 brightens it.
+
+#### 3.4. [Histogram Equalization](#34-histogram-equalization)
+
+This technique is widely used to improve the contrast of an image by redistributing the intensity values. It spreads out the most frequent intensity values, yielding an image with better overall contrast.
+
+#### 3.5. [Spatial Filtering](#35-spatial-filtering)
+
+Spatial filtering involves the use of masks or filters that modify pixel values based on the surrounding pixels. Filters like sharpening and smoothing enhance edges or reduce noise, respectively.
+
+---
+
+### 4. [Flowchart of Spatial Domain Techniques](#4-flowchart-of-spatial-domain-techniques)
+
+```plaintext
+┌──────────────────────────────────────────┐
+│ Start │
+├──────────────────────────────────────────┤
+│ Read Input Image │
+├──────────────────────────────────────────┤
+│ Choose Enhancement Technique │
+├─────────────┬─────────────┬──────────────┤
+│ │ │ │
+▼ ▼ ▼ ▼
+Logarithmic Power-Law Histogram Spatial Filtering
+Transform Transform Equalization
+│ │ │ │
+▼ ▼ ▼ ▼
+Modified Adjusted Enhanced Enhanced with
+Pixels Brightness Contrast Filter
+├──────────────────────────────────────────┤
+│ Display Enhanced Image │
+├──────────────────────────────────────────┤
+│ End │
+└──────────────────────────────────────────┘
+```
+
+---
+
+### 5. [Detailed Description of Techniques](#5-detailed-description-of-techniques)
+
+- **Image Negatives**: The transformation is achieved by inverting the pixel values using the equation:
+
+ ```
+ s = L - 1 - r
+ ```
+
+  where `r` is the input pixel, `s` is the output pixel, and `L` is the number of intensity levels. (A combined NumPy sketch of these point transforms and histogram equalization follows this list.)
+
+- **Logarithmic Transformations**: Log transformations can be mathematically expressed as:
+
+ ```
+ s = c * log(1 + r)
+ ```
+
+ where `r` is the pixel value, and `c` is a constant. This is useful for enhancing details in darker regions.
+
+- **Power-Law (Gamma) Transformations**: The power-law transformation is described by:
+
+ ```
+ s = c * r^gamma
+ ```
+
+ where `c` and `gamma` are positive constants. This is used for varying the contrast of images based on different gamma values.
+
+- **Histogram Equalization**: This technique redistributes the intensity values of the image so that the histogram of the output image is more uniformly spread. It’s expressed as:
+
+ ```
+  s = T(r) = (L - 1) * integral from 0 to r of p(r') dr'
+ ```
+
+  where `p(r')` is the probability density function of the intensity levels and `L` is the number of intensity levels in the image.
+
+- **Spatial Filtering**: This technique applies a convolutional mask to the image for enhancing specific image features like edges. For instance, a sharpening filter highlights edges and fine details in the image, while a smoothing filter reduces noise.
+
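+As referenced above, here is a combined NumPy sketch of the point transforms and histogram equalization for an 8-bit grayscale image. The gamma value and the scaling constants are assumptions chosen for the example.
+
+```python
+import numpy as np
+
+def enhance(img):
+    """Sketch of the spatial-domain point transforms on an 8-bit image."""
+    L = 256                                   # number of intensity levels
+    r = img.astype(np.float64)
+
+    negative = (L - 1) - r                    # s = L - 1 - r
+    c = (L - 1) / np.log(L)                   # scale log output back to [0, 255]
+    log_tx = c * np.log(1 + r)                # s = c * log(1 + r)
+    gamma = 0.5                               # < 1 brightens, > 1 darkens
+    power = (L - 1) * (r / (L - 1)) ** gamma  # s = c * r^gamma (normalized r)
+
+    # Histogram equalization: map each level through the scaled CDF
+    hist = np.bincount(img.ravel(), minlength=L)
+    cdf = hist.cumsum() / img.size
+    equalized = ((L - 1) * cdf)[img]          # s = (L - 1) * T(r)
+
+    return negative, log_tx, power, equalized
+```
+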
+---
+
+### 6. [Conclusion](#6-conclusion)
+
+Spatial domain techniques are powerful tools in image enhancement, offering methods to directly manipulate pixel values for improved visual effects. Whether the goal is contrast enhancement, brightness adjustment, or noise reduction, these techniques play a crucial role in image processing.
+
+---
+
+
+
+## Question 3: Describe the filtering in the frequency domain?
+
+
+
+
+## Solution: Frequency Domain Filtering
+
+
+### Table of Contents
+
+1. [Introduction to Frequency Domain Filtering](#1-introduction-to-frequency-domain-filtering)
+2. [Basic Concepts of Frequency Domain](#2-basic-concepts-of-frequency-domain)
+3. [Key Frequency Domain Filtering Techniques](#3-key-frequency-domain-filtering-techniques)
+ - 3.1. [Low-Pass Filtering](#31-low-pass-filtering)
+ - 3.2. [High-Pass Filtering](#32-high-pass-filtering)
+ - 3.3. [Band-Pass Filtering](#33-band-pass-filtering)
+ - 3.4. [Band-Stop Filtering](#34-band-stop-filtering)
+4. [Flowchart of Frequency Domain Filtering](#4-flowchart-of-frequency-domain-filtering)
+5. [Detailed Description of Techniques](#5-detailed-description-of-techniques)
+6. [Conclusion](#6-conclusion)
+7. [Summary Table of Frequency Domain Filtering Techniques](#7-summary-table-of-frequency-domain-filtering-techniques)
+
+---
+
+### 1. [Introduction to Frequency Domain Filtering](#1-introduction-to-frequency-domain-filtering)
+
+Filtering in the frequency domain involves transforming the image into its frequency components using techniques like the Fourier transform, applying a filter in the frequency domain, and then transforming it back into the spatial domain. This approach is particularly useful for tasks like noise reduction, edge detection, and image sharpening.
+
+---
+
+### 2. Basic Concepts of Frequency Domain
+
+In the frequency domain, an image is represented by sine and cosine components, which describe the intensity variation across the image. The Fourier transform converts the spatial domain into the frequency domain for easier manipulation of frequency components.
+
+The Fourier transform equation for an image is:
+
+F(u, v) = ∑ (x=0 to M-1) ∑ (y=0 to N-1) f(x, y) \* e^(-j2π (ux/M + vy/N))
+
+Where f(x, y) is the spatial domain image and F(u, v) is its frequency domain representation.
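+
+A quick NumPy illustration of this transform (the random array is a stand-in for a real grayscale image):
+
+```python
+import numpy as np
+
+f = np.random.rand(256, 256)                 # stand-in for f(x, y)
+F = np.fft.fft2(f)                           # F(u, v), the 2-D DFT
+F_centered = np.fft.fftshift(F)              # zero frequency moved to the centre
+magnitude = np.log1p(np.abs(F_centered))     # log scale, convenient for viewing
+```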
+
+---
+
+### 3. [Key Frequency Domain Filtering Techniques](#3-key-frequency-domain-filtering-techniques)
+
+#### 3.1. [Low-Pass Filtering](#31-low-pass-filtering)
+
+Low-pass filters allow low-frequency components (smooth areas) to pass through while blocking high-frequency components (edges and noise). This type of filtering is typically used for noise reduction and image smoothing.
+
+#### 3.2. [High-Pass Filtering](#32-high-pass-filtering)
+
+High-pass filters allow high-frequency components (edges and fine details) to pass while blocking low-frequency components. It is often used to enhance edges and details in an image.
+
+#### 3.3. [Band-Pass Filtering](#33-band-pass-filtering)
+
+Band-pass filters allow a specific range of frequency components to pass while blocking frequencies outside this range. This is useful when only a certain frequency band is of interest.
+
+#### 3.4. [Band-Stop Filtering](#34-band-stop-filtering)
+
+Band-stop filters block a specific range of frequencies while allowing others to pass. This is used when certain frequencies, such as noise at a particular frequency, need to be removed.
+
+---
+
+### 4. [Flowchart of Frequency Domain Filtering](#4-flowchart-of-frequency-domain-filtering)
+
+```plaintext
+┌──────────────────────────────────────────┐
+│ Start │
+├──────────────────────────────────────────┤
+│ Read Input Image │
+├──────────────────────────────────────────┤
+│ Apply Fourier Transform │
+├──────────────────────────────────────────┤
+│ Choose Filtering Technique │
+├─────────────┬─────────────┬──────────────┤
+│ │ │ │
+▼ ▼ ▼ ▼
+Low-Pass High-Pass Band-Pass Band-Stop
+Filter Filter Filter Filter
+│ │ │ │
+▼ ▼ ▼ ▼
+Filtered Enhanced Specific Specific
+Image Edges Band Frequencies
+├──────────────────────────────────────────┤
+│ Apply Inverse Fourier Transform │
+├──────────────────────────────────────────┤
+│ Display Filtered Image │
+├──────────────────────────────────────────┤
+│ End │
+└──────────────────────────────────────────┘
+```
+
+---
+
+### 5. [Detailed Description of Techniques](#5-detailed-description-of-techniques)
+
+- **Low-Pass Filtering**: Low-pass filters remove high-frequency components that represent noise and sharp edges. The filtered image appears smoother because only low-frequency signals are retained. The filter function in the frequency domain is usually a Gaussian or ideal filter.
+- **High-Pass Filtering**: High-pass filters remove low-frequency components, highlighting high-frequency details like edges. The result is a sharper image with enhanced edges and fine details. The filter function is often designed to attenuate low frequencies while amplifying high frequencies.
+- **Band-Pass Filtering**: A band-pass filter allows frequencies within a certain range to pass while blocking frequencies outside this range. This is especially useful when an image contains useful information at a particular frequency band.
+- **Band-Stop Filtering**: The opposite of band-pass filtering, band-stop filters block frequencies within a specific range while passing all others. This is helpful for removing specific types of noise without affecting the rest of the image.
+
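+The sketch below builds a Gaussian low-pass transfer function and applies it (and its high-pass complement) in the frequency domain. The cutoff `d0` and the random stand-in image are assumptions for the example.
+
+```python
+import numpy as np
+
+def gaussian_lowpass(shape, d0):
+    """Sketch: Gaussian low-pass transfer function H(u, v) with cutoff d0."""
+    rows, cols = shape
+    u = np.arange(rows) - rows // 2
+    v = np.arange(cols) - cols // 2
+    V, U = np.meshgrid(v, u)
+    return np.exp(-(U ** 2 + V ** 2) / (2.0 * d0 ** 2))
+
+def frequency_filter(img, H):
+    """Multiply the centred spectrum by H, then transform back."""
+    F = np.fft.fftshift(np.fft.fft2(img))
+    return np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
+
+img = np.random.rand(256, 256)                      # stand-in image
+H_lp = gaussian_lowpass(img.shape, d0=30)
+smoothed = frequency_filter(img, H_lp)              # low-pass: noise reduction
+sharpened_detail = frequency_filter(img, 1 - H_lp)  # high-pass: edges, detail
+```
+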
+---
+
+### 6. [Conclusion](#6-conclusion)
+
+Filtering in the frequency domain is a powerful tool for image processing, allowing selective manipulation of frequency components. By applying different types of filters, such as low-pass, high-pass, band-pass, and band-stop, we can achieve various objectives like noise reduction, edge enhancement, and the removal of specific frequencies. Understanding the frequency content of an image helps in selecting the right filter for the desired outcome.
+
+---
+
+### 7. [Summary Table of Frequency Domain Filtering Techniques](#7-summary-table-of-frequency-domain-filtering-techniques)
+
+1. **Low-Pass Filter**
+
+ - **Example Algorithms**:
+ - **Gaussian Blur**: Uses a Gaussian function to create a low-pass filter that smooths the image by reducing high-frequency content.
+ - **Butterworth Low-Pass Filter**: Provides a smoother transition between passed and blocked frequencies compared to the ideal filter.
+ - **Effect**:
+ - Smooths the image by averaging pixel values, thereby reducing noise and minor variations.
+ - **Applications**:
+ - **Medical Imaging**: Enhances the visibility of anatomical structures by reducing noise.
+ - **Photography**: Softens images to achieve a particular aesthetic.
+ - **Remote Sensing**: Filters out high-frequency noise from satellite images.
+
+2. **High-Pass Filter**
+
+ - **Example Algorithms**:
+ - **Laplacian Filter**: Detects edges by calculating the second derivative of the image intensity.
+ - **Sobel Filter**: Emphasizes edges in specific directions by calculating the gradient.
+ - **High-Boost Filtering**: Enhances details by adding a scaled version of the high-pass filtered image to the original image.
+ - **Effect**:
+ - Highlights edges and fine details, making transitions between different regions more pronounced.
+ - **Applications**:
+ - **Feature Detection**: Identifies important features within an image for further processing.
+ - **Image Sharpening**: Enhances the clarity and definition of images.
+ - **Security Systems**: Improves the detection of objects and movements.
+
+3. **Band-Pass Filter**
+
+ - **Example Algorithms**:
+ - **Gabor Filters**: Combines Gaussian filtering with sinusoidal waveforms to capture specific frequency and orientation information.
+ - **Band-Pass Butterworth Filter**: Allows a specific range of frequencies to pass while attenuating others with a Butterworth response.
+ - **Effect**:
+ - Isolates and emphasizes particular frequency ranges, enabling the extraction of specific features.
+ - **Applications**:
+ - **Texture Analysis**: Identifies and analyzes texture patterns within images.
+ - **Pattern Recognition**: Facilitates the recognition of specific patterns based on their frequency characteristics.
+ - **Signal Processing**: Filters signals to retain desired frequency components for analysis.
+
+4. **Band-Stop Filter**
+ - **Example Algorithms**:
+ - **Notch Filters**: Specifically designed to eliminate narrow frequency bands, often used to remove power line interference.
+ - **Band-Stop Butterworth Filter**: Blocks a specific range of frequencies with a Butterworth response while allowing others to pass.
+ - **Effect**:
+ - Removes unwanted frequency components without significantly affecting the rest of the image.
+ - **Applications**:
+ - **Noise Reduction**: Eliminates specific types of noise, such as periodic interference, without degrading the overall image quality.
+ - **Audio Processing**: Removes specific frequency bands from audio signals to enhance sound quality.
+ - **Communication Systems**: Filters out unwanted frequency bands to improve signal clarity and reduce interference.
+
+| **Filtering Technique** | **Purpose** | **Frequency Components** | **Common Uses** | **Example Algorithms** | **Effect** | **Applications** |
+| ----------------------- | ----------------------------------------------------- | -------------------------------------- | ------------------------------------- | ---------------------------------------------------- | -------------------------------------- | -------------------------------------------------------- |
+| **Low-Pass Filter** | Allows low frequencies to pass, blocks high | Passes low-frequency (smooth areas) | Noise reduction, image smoothing | Gaussian Blur, Butterworth Low-Pass Filter | Smooths the image, reduces noise | Medical imaging, Photography, Remote sensing |
+| **High-Pass Filter** | Allows high frequencies to pass, blocks low | Passes high-frequency (edges, details) | Edge enhancement, detail enhancement | Laplacian Filter, Sobel Filter, High-Boost Filtering | Enhances edges and fine details | Feature detection, Image sharpening, Security systems |
+| **Band-Pass Filter** | Allows a specific range of frequencies to pass | Passes a specific frequency band | Isolating particular frequency ranges | Gabor Filters, Band-Pass Butterworth Filter | Isolates specific frequency components | Texture analysis, Pattern recognition, Signal processing |
+| **Band-Stop Filter** | Blocks a specific range of frequencies, passes others | Blocks a specific frequency band | Removing specific noise frequencies | Notch Filters, Band-Stop Butterworth Filter | Removes targeted frequency components | Noise reduction, Audio processing, Communication systems |
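+
+As a closing illustration, the band-stop case can be sketched with a Butterworth band-reject transfer function. The centre `d0`, width `w`, and order `n` below are assumed example values; the result can be applied with the same `frequency_filter` pattern shown earlier.
+
+```python
+import numpy as np
+
+def butterworth_bandstop(shape, d0=50.0, w=10.0, n=2):
+    """Sketch: Butterworth band-reject H(u, v) centred on radius d0."""
+    rows, cols = shape
+    u = np.arange(rows) - rows // 2
+    v = np.arange(cols) - cols // 2
+    V, U = np.meshgrid(v, u)
+    d = np.sqrt(U ** 2 + V ** 2)
+    denom = d ** 2 - d0 ** 2
+    denom = np.where(denom == 0, 1e-8, denom)    # avoid the singular ring d = d0
+    return 1.0 / (1.0 + (d * w / denom) ** (2 * n))
+```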
diff --git a/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Solutions.pdf b/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Solutions.pdf
new file mode 100644
index 0000000..c855984
Binary files /dev/null and b/7 SEMESTER/Image Processing/Assignment/Assignment 2/Assignment 2 Solutions.pdf differ
diff --git a/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Questions.txt b/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Questions.txt
new file mode 100644
index 0000000..4bdb138
--- /dev/null
+++ b/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Questions.txt
@@ -0,0 +1,9 @@
+Assignment 3
+
+Q1) Write short notes on:
+
+i) Thresholding segmentation method
+
+ii) Region-based segmentation method
+
+Q2) Explain the various types of feature extraction techniques?
\ No newline at end of file
diff --git a/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Solutions.md b/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Solutions.md
new file mode 100644
index 0000000..8b81abb
--- /dev/null
+++ b/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Solutions.md
@@ -0,0 +1,890 @@
+
+
+## Author: Madhurima Rawat
+
+
+
+
+# Assignment 3
+
+
+
+
+## Question 1: Write short notes on:
+
+
+
+
+## (i): Thresholding Segmentation Method
+
+
+
+
+## Solution: Thresholding Segmentation Method in Image Processing
+
+
+Thresholding is a fundamental technique in **image segmentation** that is widely used to separate objects from the background. It partitions an image into distinct regions based on intensity levels by comparing pixel intensities to one or more threshold values, which creates a binary or multi-class segmented image.
+
+---
+
+### Table of Contents
+
+1. [Introduction to Thresholding](#1-introduction-to-thresholding)
+2. [Types of Thresholding](#2-types-of-thresholding)
+   - [Global Thresholding](#21-global-thresholding)
+   - [Adaptive Thresholding](#22-adaptive-thresholding)
+   - [Multi-Level Thresholding](#23-multi-level-thresholding)
+3. [Thresholding in Different Color Spaces (HSI)](#3-thresholding-in-different-color-spaces-hsi)
+4. [Mathematical Formulation](#4-mathematical-formulation)
+5. [Flowchart of the Thresholding Process](#5-flowchart-of-the-thresholding-process)
+6. [Real-Life Examples of Thresholding](#6-real-life-examples-of-thresholding)
+7. [Advantages and Disadvantages](#7-advantages-and-disadvantages)
+8. [Conclusion](#8-conclusion)
+
+---
+
+
+
+### 1. Introduction to Thresholding
+
+Thresholding is an essential image processing technique used to convert grayscale images into binary images. A binary image contains only two colors, typically black (0) and white (1), depending on whether pixel intensities are above or below a chosen threshold value.
+
+For example, given an intensity threshold \( T \), the pixel values are set as follows:
+
+```
+f(x, y) = 1, if I(x, y) > T
+ 0, if I(x, y) ≤ T
+```
+
+Where:
+
+- \( f(x, y) \) is the output segmented image.
+- \( I(x, y) \) is the intensity of the pixel at position \( (x, y) \).
+- \( T \) is the threshold value.
+
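+A minimal NumPy sketch of this rule (the mean-based choice of T and the random stand-in image are assumptions for the example):
+
+```python
+import numpy as np
+
+img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in image
+T = img.mean()                        # one simple global choice of threshold
+binary = (img > T).astype(np.uint8)   # f(x, y) = 1 if I(x, y) > T, else 0
+```
+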
+---
+
+
+
+### 2. Types of Thresholding
+
+
+
+#### 2.1. Global Thresholding
+
+Global thresholding uses a single threshold value for the entire image. This method works well for images where the intensity of objects and the background are distinctly different.
+
+
+
+#### 2.2. Adaptive Thresholding
+
+In adaptive thresholding, the threshold value is calculated locally for different parts of the image. This method is ideal for images with uneven lighting or backgrounds that vary in intensity.
+
+- **Formula for Adaptive Thresholding**:
+
+```
+T(x, y) = Σ I(x, y) / N
+```
+
+Where:
+
+- \( T(x, y) \) is the local threshold at pixel \( (x, y) \).
+- \( N \) is the number of pixels in the local neighborhood.
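+
+A short sketch of local-mean adaptive thresholding; it assumes SciPy is available and uses an assumed 15x15 neighbourhood:
+
+```python
+import numpy as np
+from scipy.ndimage import uniform_filter
+
+img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in image
+local_T = uniform_filter(img.astype(float), size=15)  # mean of each 15x15 window
+binary = (img > local_T).astype(np.uint8)             # compare against local T
+```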
+
+
+
+#### 2.3. Multi-Level Thresholding
+
+Multi-level thresholding uses multiple thresholds to segment an image into more than two classes. It is suitable when the image contains multiple objects with different intensity ranges.
+
+---
+
+
+
+### 3. Thresholding in Different Color Spaces (HSI)
+
+Thresholding can be extended to other color spaces like **HSI (Hue, Saturation, Intensity)**. In the HSI color space:
+
+- **Hue (H)** is used to differentiate objects based on color.
+- **Saturation (S)** separates vibrant from dull areas.
+- **Intensity (I)** is used for light and dark area separation, similar to grayscale thresholding.
+
+#### Example:
+
+Consider a traffic light detection system. Here, the **hue** component can be used to detect the colors of the lights (red, yellow, green). A threshold can be applied to isolate only the desired color and discard others. Similarly, intensity thresholding can be used to separate bright regions from dark backgrounds.
+
+---
+
+
+
+### 4. Mathematical Formulation
+
+#### Global Thresholding:
+
+Global thresholding computes a single threshold value for the entire image based on the average intensity:
+
+```
+T = (1 / (M * N)) * ΣΣ I(x, y)
+```
+
+Where:
+
+- \( T \) is the global threshold.
+- \( I(x, y) \) is the intensity of pixel \( (x, y) \).
+- \( M \times N \) is the total number of pixels in the image.
+
+#### Adaptive Thresholding:
+
+For adaptive thresholding, the threshold is computed locally:
+
+```
+T(x, y) = ( Σ I(x, y) ) / N
+```
+
+Where:
+
+- \( T(x, y) \) is the local threshold for pixel \( (x, y) \).
+- \( N \) is the number of pixels in the local neighborhood.
+
+#### Multi-Level Thresholding:
+
+In multi-level thresholding, multiple thresholds \( T_1, T_2, \dots, T_n \) are defined, dividing the image into multiple regions:
+
+```
+Region 1: I(x, y) ≤ T_1
+Region 2: T_1 < I(x, y) ≤ T_2
+Region 3: T_2 < I(x, y) ≤ T_3
+...
+Region n: I(x, y) > T_n
+```
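+
+Multi-level thresholding maps directly onto `np.digitize`; the two example thresholds below are assumptions:
+
+```python
+import numpy as np
+
+img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in image
+T1, T2 = 85, 170                           # assumed thresholds, T1 < T2
+regions = np.digitize(img, bins=[T1, T2])  # labels 0, 1, 2 for the three ranges
+```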
+
+---
+
+
+
+### 5. Flowchart of the Thresholding Process
+
+Below is a simplified flowchart illustrating the thresholding process:
+
+```
+ Start
+ |
+ Load the image (I)
+ |
+ Convert the image to grayscale
+ |
+ Select Threshold (T or Local T)
+ |
+ Compare pixel intensities with T
+ |
+ Assign pixel as foreground or background
+ |
+ Segmented Output (f(x, y))
+ |
+ End
+```
+
+---
+
+
+
+### 6. Real-Life Examples of Thresholding
+
+#### 1. **Document Scanning**
+
+In scanned documents, thresholding helps convert text (whether handwritten or printed) into binary images, simplifying further processing like Optical Character Recognition (OCR). By choosing a proper threshold, the text can be segmented from the paper background.
+
+#### 2. **Medical Imaging**
+
+In medical images like X-rays or MRI scans, thresholding helps in segmenting organs, tissues, or tumors. For instance, a radiologist may use thresholding to isolate bone structures or identify abnormal tissues based on their intensity.
+
+#### 3. **Biometrics**
+
+In fingerprint recognition systems, thresholding is used to isolate the ridges and valleys from the background, which is crucial for extracting features used in identification algorithms.
+
+#### 4. **Traffic Monitoring**
+
+In video surveillance, thresholding can detect vehicles, pedestrians, or traffic lights by isolating specific objects based on their intensity levels. For instance, during nighttime, headlights can be isolated through intensity thresholding.
+
+---
+
+
+
+### 7. Advantages and Disadvantages
+
+#### Advantages:
+
+- **Simple and Fast**: Thresholding is easy to implement and computationally efficient, making it suitable for real-time applications.
+- **Effective for High-Contrast Images**: It works well when the objects and background have clear intensity differences.
+
+#### Disadvantages:
+
+- **Sensitive to Lighting Variations**: Global thresholding is highly sensitive to uneven illumination. It may not work well if the lighting varies significantly across the image.
+- **Limited for Complex Images**: In images with overlapping intensity ranges or complex backgrounds, thresholding might not produce accurate segmentation results.
+
+---
+
+
+
+### 8. Conclusion
+
+Thresholding segmentation is a simple yet powerful tool in image processing that separates objects from the background based on pixel intensity. Global thresholding works best for uniform images with clear intensity differences, while adaptive and multi-level thresholding are better for images with complex lighting conditions or multiple objects. Thresholding techniques find widespread application in document scanning, medical imaging, traffic monitoring, and biometrics.
+
+---
+
+
+
+
+
+
+## (ii): Region-Based Segmentation Method
+
+
+
+
+## Solution: Region-Based Segmentation Method in Image Processing
+
+
+Region-based segmentation is a powerful technique in **image segmentation** that groups pixels into regions based on predefined criteria such as intensity, texture, or color homogeneity. This method focuses on finding connected regions of similar properties, making it particularly useful for images with distinct areas.
+
+---
+
+### Table of Contents
+
+1. [Introduction to Region-Based Segmentation](#1-introduction-to-region-based-segmentation)
+2. [Types of Region-Based Segmentation](#2-types-of-region-based-segmentation)
+ - [Region Growing](#21-region-growing)
+ - [Region Splitting and Merging](#22-region-splitting-and-merging)
+ - [Watershed Segmentation](#23-watershed-segmentation)
+3. [Segmentation in Different Color Spaces (HSI)](#3-segmentation-in-different-color-spaces-hsi)
+4. [Mathematical Formulation](#4-mathematical-formulation)
+5. [Flowchart of Region-Based Segmentation](#5-flowchart-of-region-based-segmentation)
+6. [Real-Life Examples of Region-Based Segmentation](#6-real-life-examples-of-region-based-segmentation)
+7. [Advantages and Disadvantages](#7-advantages-and-disadvantages)
+8. [Conclusion](#8-conclusion)
+
+---
+
+
+
+### 1. Introduction to Region-Based Segmentation
+
+Region-based segmentation is a technique that divides an image into regions by examining the homogeneity of the pixel values, such as intensity, color, or texture. The core principle is that neighboring pixels that share similar properties (e.g., intensity) are grouped together to form regions.
+
+- **Homogeneity**: Pixels within the same region should be homogeneous according to a specific criterion (intensity, texture, or color).
+- **Connectivity**: Region-based methods often rely on pixel connectivity to determine regions.
+
+The goal is to identify connected regions that represent meaningful parts of the image, such as objects, textures, or patterns.
+
+---
+
+
+
+### 2. Types of Region-Based Segmentation
+
+
+
+#### 2.1. Region Growing
+
+Region growing is an iterative process that starts from a seed point and adds neighboring pixels to the region as long as they meet specific similarity criteria.
+
+- **Algorithm**:
+ 1. Select a seed pixel.
+ 2. Compare neighboring pixels with the seed pixel.
+ 3. Add the neighbors that meet the homogeneity criterion.
+ 4. Continue until no more pixels can be added.
+
+##### Formula:
+
+The criteria for adding neighboring pixels can be expressed as:
+
+```
+|I(x, y) - I_seed| < T
+```
+
+### Explanation of Terms:
+
+- **\( I(x, y) \)**: This is the intensity of the pixel at coordinates \( (x, y) \) in the image. It represents the grayscale value or color intensity of that specific pixel.
+- **\( I_seed \)**: This is the intensity of the seed pixel, which is a chosen reference pixel for region growing or segmentation algorithms. The seed pixel serves as the starting point for identifying similar neighboring pixels.
+
+- **\( T \)**: This is a threshold that defines the allowable difference in intensity between the seed pixel and any other pixel. If the difference between a pixel’s intensity and the seed pixel's intensity is less than \( T \), the pixel is considered similar and can be grouped into the same region.
+
+### In Context:
+
+This formula is used in region-growing segmentation algorithms. Starting from a seed pixel, the algorithm examines neighboring pixels, comparing their intensities to the seed pixel. If the intensity difference is less than the threshold \( T \), the neighboring pixel is included in the growing region. This process continues, expanding the region until no more pixels meet the similarity criteria.
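+
+A minimal region-growing sketch built on this rule, using 4-connectivity and a breadth-first traversal (in real use, the seed location and threshold would come from the application):
+
+```python
+import numpy as np
+from collections import deque
+
+def region_grow(img, seed, T):
+    """Sketch: grow a region from `seed` while |I(x, y) - I_seed| < T."""
+    rows, cols = img.shape
+    seed_val = float(img[seed])
+    region = np.zeros(img.shape, dtype=bool)
+    region[seed] = True
+    queue = deque([seed])
+    while queue:
+        x, y = queue.popleft()
+        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
+            if 0 <= nx < rows and 0 <= ny < cols and not region[nx, ny]:
+                if abs(float(img[nx, ny]) - seed_val) < T:
+                    region[nx, ny] = True      # pixel joins the region
+                    queue.append((nx, ny))
+    return region
+```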
+
+
+
+#### 2.2. Region Splitting and Merging
+
+This method begins by treating the entire image as a single region and then recursively splits it into smaller regions based on a homogeneity criterion. Regions are then merged if they meet a specific similarity criterion.
+
+##### Algorithm:
+
+1. **Splitting**: Recursively divide the region until each sub-region meets the homogeneity condition.
+2. **Merging**: Merge adjacent regions if their combined region satisfies the homogeneity condition.
+
+##### Formula:
+
+If regions \(R_1\) and \(R_2\) satisfy the merging condition:
+
+```
+|I_mean(R1) - I_mean(R2)| < T
+```
+
+### Explanation of Terms:
+
+- **\( I_mean(R_1) \)**: This represents the mean intensity of region \( R_1 \). It is the average pixel intensity of all the pixels within the region \( R_1 \).
+- **\( I_mean(R_2) \)**: This represents the mean intensity of region \( R_2 \). Similar to \( R_1 \), this is the average pixel intensity of the pixels within the region \( R_2 \).
+
+- **\( T \)**: This is a predefined threshold value that determines whether two regions should be merged. It represents the allowable difference between the mean intensities of the two regions for them to be considered similar enough to be merged.
+
+### In Context:
+
+This formula is commonly used in region-based segmentation techniques, where two neighboring regions, \( R_1 \) and \( R_2 \), are merged if the difference between their mean intensities is less than the threshold \( T \). If the condition holds true, the two regions are combined into a single region, aiding in the segmentation process by grouping similar areas of the image.
+
+
+
+#### 2.3. Watershed Segmentation
+
+Watershed segmentation treats the image as a topographic surface, where the intensity values represent elevations. The algorithm floods basins (regions of local minima) and treats their boundaries as watershed lines.
+
+##### Algorithm:
+
+1. Treat the image as a 3D landscape where pixel intensity is the height.
+2. Flood the landscape from the lowest points.
+3. As flooding continues, regions grow and meet at watershed lines, which separate different regions.
+
+##### Mathematical Representation:
+
+Watershed is based on finding catchment basins using gradient magnitudes. Let \(G(x, y)\) be the gradient magnitude of the image \(I(x, y)\); the flooding is performed on this gradient surface, and the watershed lines form where basins meet:
+
+```
+Watershed lines = ∂(Catchment basins of I(x, y))
+```
+
+### Explanation of Terms:
+
+- **Watershed lines**: These are the boundaries or edges that separate different regions in an image. They represent the dividing lines between distinct regions (or catchment basins) in the image, based on changes in intensity.
+- **∂ (boundary operator)**: This symbol denotes the boundary of a set. In this context, it refers to the boundary of the catchment basins in the image.
+
+- **Catchment basins**: These are regions in the image where pixels are grouped together based on their intensity values. Each basin represents an area where water (conceptually) would flow to a local minimum in a topographic surface derived from the pixel intensities.
+
+- **I(x, y)**: This represents the image function, where `x` and `y` are the spatial coordinates of the pixels. The function `I(x, y)` describes the intensity of the image at each point `(x, y)`.
+
+### In Context:
+
+The watershed segmentation algorithm treats an image like a topographic map where pixel intensities correspond to elevation. The watershed lines are the ridges that separate different catchment basins, helping to delineate regions of interest in the image for segmentation.
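+
+In practice the flooding itself is usually delegated to a library. The sketch below uses scikit-image's marker-based watershed on a distance transform, following its documented pattern; the two overlapping squares are an assumed toy input:
+
+```python
+import numpy as np
+from scipy import ndimage as ndi
+from skimage.feature import peak_local_max
+from skimage.segmentation import watershed
+
+binary = np.zeros((80, 80), dtype=bool)
+binary[10:40, 10:40] = True
+binary[30:70, 30:70] = True                   # two touching objects to separate
+
+distance = ndi.distance_transform_edt(binary)             # elevation surface
+coords = peak_local_max(distance, min_distance=10, labels=binary)
+markers = np.zeros(distance.shape, dtype=int)
+markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)  # one marker per basin
+labels = watershed(-distance, markers, mask=binary)       # flood from markers
+```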
+
+---
+
+
+
+### 3. Segmentation in Different Color Spaces (HSI)
+
+Like thresholding, region-based segmentation can be applied in color spaces like **HSI (Hue, Saturation, Intensity)** to group pixels into regions based on specific attributes such as color or intensity.
+
+- **Hue**: Used for distinguishing objects based on color similarity.
+- **Saturation**: Helps separate regions of varying color vividness.
+- **Intensity**: Used to segment regions with varying brightness levels.
+
+#### Example:
+
+In satellite image processing, regions of different land types (water bodies, vegetation, urban areas) can be segmented based on hue and saturation values.
+
+---
+
+
+
+### 4. Mathematical Formulation
+
+Region-based segmentation relies on local similarity between pixels. A general condition for region growing can be expressed as:
+
+```
+S(x, y) = 1 if |I(x, y) - I_seed| < T
+ 0 otherwise
+```
+
+### Explanation of Terms:
+
+- **\( S(x, y) \)**: This is the segmented output at pixel \( (x, y) \). It is a binary value, where 1 indicates the pixel is part of the region, and 0 means the pixel is excluded from the region.
+- **\( I(x, y) \)**: This represents the intensity of the pixel at coordinates \( (x, y) \) in the image, which is compared to the seed pixel for similarity.
+
+- **\( I_seed \)**: The intensity of the seed pixel, which serves as the reference point for the region-growing algorithm.
+
+- **\( T \)**: The threshold that defines how much the intensity of pixel \( (x, y) \) can differ from the seed pixel \( I_seed \) to still be considered similar and part of the same region.
+
+### In Context:
+
+This formulation is typically used in region-growing algorithms, where the decision to include a pixel in a segmented region is based on whether its intensity is similar to the seed pixel within a certain threshold. If the difference between \( I(x, y) \) and \( I_seed \) is less than the threshold \( T \), the pixel is included in the region, denoted by \( S(x, y) = 1 \). Otherwise, it is excluded, denoted by \( S(x, y) = 0 \).
+
+In **watershed segmentation**, instead of using intensities directly, the **gradient magnitude** \( G(x, y) \) is used. The watershed lines are drawn where there is a high gradient, indicating edges or boundaries between regions.
+
+---
+
+
+
+### 5. Flowchart of Region-Based Segmentation
+
+Below is a simplified flowchart illustrating the region-based segmentation process:
+
+```
+ Start
+ |
+ Load the image (I)
+ |
+ Initialize seed points or region
+ |
+ Apply growing or splitting/merging criteria
+ |
+ Group pixels into homogeneous regions
+ |
+ Assign pixels to regions or boundaries
+ |
+ Segmented Output (R(x, y))
+ |
+ End
+```
+
+---
+
+
+
+### 6. Real-Life Examples of Region-Based Segmentation
+
+1. **Medical Imaging**: Region-based segmentation is used to delineate tumors, organs, or tissues in MRI and CT scans. Region growing helps identify regions with similar tissue density or intensity, making it easier for radiologists to focus on specific areas.
+2. **Remote Sensing**: In satellite imagery, region-based segmentation helps classify different land types (water, forest, urban) based on spectral similarity, allowing for better land-use monitoring.
+3. **Object Tracking**: In video surveillance, region-based segmentation can be used to track moving objects (e.g., cars, pedestrians) by grouping pixels with similar motion characteristics.
+
+4. **Agricultural Applications**: In crop analysis, region growing helps in identifying distinct crop areas or diseased regions by segmenting images based on color or texture patterns.
+
+---
+
+
+
+### 7. Advantages and Disadvantages
+
+#### Advantages:
+
+- **Effective for Homogeneous Regions**: Works well when the objects in an image have consistent properties.
+- **Precise Boundaries**: Region-based methods can produce more accurate region boundaries compared to global methods like thresholding.
+
+#### Disadvantages:
+
+- **Sensitive to Noise**: Region-growing techniques can be sensitive to noise, which may cause over-segmentation.
+- **Seed Point Dependency**: Region-growing methods require good seed point selection for optimal results.
+
+---
+
+
+
+### 8. Conclusion
+
+Region-based segmentation is a flexible and powerful tool for grouping pixels into meaningful regions based on local similarity in properties like intensity, color, and texture. It offers high accuracy in homogenous areas and is widely used in fields such as medical imaging, remote sensing, and object tracking. While effective, the method requires careful consideration of noise and seed point initialization to achieve optimal performance.
+
+---
+
+
+
+
+
+
+## Question 2: Explain the various types of feature extraction techniques?
+
+
+
+
+## Solution: Feature Extraction Techniques in Image Processing
+
+
+Feature extraction is a crucial step in image processing, where meaningful information or features are identified from raw data for further analysis or pattern recognition. These features can include edges, textures, shapes, and other properties that help in tasks like image segmentation, classification, object detection, and more.
+
+### Table of Contents
+
+1. [Introduction to Feature Extraction](#1-introduction-to-feature-extraction)
+2. [Types of Feature Extraction Techniques](#2-types-of-feature-extraction-techniques)
+ - [Texture-Based Techniques](#21-texture-based-techniques)
+ - [Edge-Based Techniques](#22-edge-based-techniques)
+ - [Shape-Based Techniques](#23-shape-based-techniques)
+ - [Color-Based Techniques](#24-color-based-techniques)
+ - [Keypoint-Based Techniques](#25-keypoint-based-techniques)
+3. [Feature Extraction in Different Color Spaces (HSI)](#3-feature-extraction-in-different-color-spaces-hsi)
+4. [Mathematical Formulations](#4-mathematical-formulations)
+5. [Flowchart of the Feature Extraction Process](#5-flowchart-of-the-feature-extraction-process)
+6. [Real-Life Applications of Feature Extraction](#6-real-life-applications-of-feature-extraction)
+7. [Advantages and Disadvantages](#7-advantages-and-disadvantages)
+8. [Conclusion](#8-conclusion)
+9. [References](#9-references)
+
+---
+
+
+
+### 1. Introduction to Feature Extraction
+
+Feature extraction is the process of transforming raw data into a set of meaningful characteristics or patterns that capture relevant information about the data. In image processing, features are used to represent specific patterns or details such as edges, textures, and shapes. These features are critical for tasks like object detection, recognition, and image classification.
+
+---
+
+
+
+### 2. Types of Feature Extraction Techniques
+
+There are several methods used to extract features from an image, each suited for different types of image analysis tasks. These methods include texture-based, edge-based, shape-based, color-based, and keypoint-based techniques.
+
+
+
+#### 2.1 Texture-Based Techniques
+
+Texture features describe the surface properties of an object or region, such as smoothness, roughness, and granularity. Common texture extraction techniques include:
+
+- **Gray Level Co-Occurrence Matrix (GLCM)**: This technique calculates how frequently pairs of pixel values occur in a given spatial relationship and uses that information to analyze texture.
+
+```
+P(i, j | d, θ) = (number of (i, j) occurrences separated by distance d at angle θ) / N
+```
+
+### Explanation of Terms:
+
+- **\( P(i, j | d, \theta) \)**: This represents the **probability** of occurrence of pixel pairs with intensity levels \( i \) and \( j \), given a distance \( d \) and an angle \( \theta \).
+- **\( d \)**: The **spatial distance** between the two pixels in the pair. This specifies how far apart the pixel pairs are in the image.
+
+- **\( \theta \)**: The **direction** or **angle** at which the pixel pairs are separated. Common angles include 0° (horizontal), 90° (vertical), 45°, and 135°.
+
+- **\( N \)**: The **total number of pixel pairs** being considered in the image for the given distance \( d \) and angle \( \theta \).
+
+### In Context:
+
+This formula is often used in texture analysis, specifically in **Gray Level Co-occurrence Matrices (GLCM)**. The GLCM measures how often pairs of pixel values (i.e., intensities \( i \) and \( j \)) occur at a specific spatial relationship, defined by distance \( d \) and angle \( \theta \). The result gives insight into the texture of the image by analyzing these pairwise pixel occurrences, capturing patterns like coarseness, contrast, and homogeneity.
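+
+A small NumPy sketch that counts these co-occurrences directly for one offset (d = 1 pixel at 0°); `img` is assumed to be a 2-D uint8 array, and scikit-image's `graycomatrix` offers a library alternative:
+
+```python
+import numpy as np
+
+def glcm(img, dx=1, dy=0, levels=256):
+    """Sketch: normalized GLCM for the offset (dy, dx)."""
+    ref = img[: img.shape[0] - dy, : img.shape[1] - dx]   # reference pixels i
+    nbr = img[dy:, dx:]                                   # neighbours j
+    P = np.zeros((levels, levels), dtype=np.float64)
+    np.add.at(P, (ref.ravel(), nbr.ravel()), 1)  # count each (i, j) pair
+    return P / P.sum()                           # normalize by N pairs
+```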
+
+- **Local Binary Patterns (LBP)**: Captures texture by comparing each pixel to its neighboring pixels and encoding the result as a binary number.
+
+### **Formula**:
+
+```
+LBP(x, y) = ∑(p=0 to P-1) 2^p * s(I_p - I_c)
+```
+
+### **Explanation**:
+
+- **LBP(x, y)**: This represents the Local Binary Pattern value at the pixel located at coordinates `(x, y)` in the image. It is a texture descriptor used for image analysis.
+
+- **P**: This is the number of neighboring pixels surrounding the central pixel. The most common choices for P are 8 (for a 3x3 neighborhood) or 16 (for a larger neighborhood).
+
+- **I_p**: This denotes the intensity value of the neighboring pixel at position `p`. These are the pixels that surround the central pixel in the defined neighborhood.
+
+- **I_c**: This is the intensity value of the central pixel located at `(x, y)`. It serves as a reference point for comparing the intensities of the neighboring pixels.
+
+- **s(x)**: This is a thresholding function defined as:
+
+ ```
+ s(x) = 1 if x >= 0
+ s(x) = 0 otherwise
+ ```
+
+ This function checks if the difference between the intensity of the neighboring pixel (`I_p`) and the central pixel (`I_c`) is non-negative. If it is, `s(x)` returns 1; otherwise, it returns 0.
+
+- **The summation**: The expression `∑(p=0 to P-1)` sums over all neighboring pixels. For each neighbor, it computes `2^p * s(I_p - I_c)`, where `p` represents the index of the neighboring pixel. This results in a binary pattern, where each pixel contributes to the final LBP value based on whether its intensity is greater than or less than the intensity of the central pixel.
+
+The final LBP value captures the local texture information by encoding the intensity relationships between the central pixel and its neighbors, providing a powerful tool for texture classification and image analysis.
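+
+A basic 8-neighbour LBP sketch (fixed 3x3 neighbourhood with no circular interpolation, an assumed simplification of the general operator):
+
+```python
+import numpy as np
+
+def lbp(img):
+    """Sketch: 8-neighbour LBP codes for the interior pixels of `img`."""
+    c = img[1:-1, 1:-1].astype(np.int32)          # central pixels I_c
+    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
+               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
+    code = np.zeros_like(c)
+    for p, (dy, dx) in enumerate(offsets):
+        nbr = img[1 + dy : img.shape[0] - 1 + dy,
+                  1 + dx : img.shape[1] - 1 + dx].astype(np.int32)
+        code += (nbr >= c).astype(np.int32) << p  # s(I_p - I_c) * 2^p
+    return code
+```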
+
+
+
+#### 2.2 Edge-Based Techniques
+
+Edge-based techniques focus on detecting significant changes in intensity, usually corresponding to object boundaries.
+
+- **Canny Edge Detection**: A multi-stage algorithm used to detect edges by applying Gaussian smoothing followed by gradient computation.
+
+```
+G(x, y) = sqrt((I_x(x, y))^2 + (I_y(x, y))^2)
+```
+
+### Explanation of Terms:
+
+- **\( G(x, y) \)**: This is the **gradient magnitude** at the pixel location \((x, y)\). It represents the intensity of the gradient at that pixel, which indicates the rate of change of pixel intensity.
+
+- **\( I_x(x, y) \)**: The **gradient** of the image in the **x-direction** (horizontal), at the pixel location \((x, y)\). It measures how the pixel intensity changes in the horizontal direction.
+
+- **\( I_y(x, y) \)**: The **gradient** of the image in the **y-direction** (vertical), at the pixel location \((x, y)\). It measures how the pixel intensity changes in the vertical direction.
+
+### In Context:
+
+This formula computes the **gradient magnitude** at each pixel in an image. The gradient magnitude is a measure of how rapidly the image intensity changes at that pixel, which is important for edge detection. The larger the gradient, the more likely the pixel is part of an edge in the image. By combining the gradients in both the x and y directions, this formula helps detect the strength and direction of edges in the image.
+
+- **Sobel Operator**: A convolution-based method that computes edge intensity in both horizontal and vertical directions using Sobel kernels.
+
+### **Formula**:
+
+```
+I_x = [-1, 0, 1] * I(x, y)
+ [-2, 0, 2]
+ [-1, 0, 1]
+
+I_y = [-1, -2, -1] * I(x, y)
+ [0, 0, 0]
+ [1, 2, 1]
+```
+
+### **Explanation**:
+
+- `I_x` and `I_y` represent the gradients of the image in the x and y directions, respectively.
+- The matrices are convolution kernels (Sobel operators) used to calculate the gradients:
+
+ - **Sobel Operator for X Direction (`I_x`)**:
+
+ ```
+ [-1, 0, 1]
+ [-2, 0, 2]
+ [-1, 0, 1]
+ ```
+
+ - This kernel detects horizontal edges by emphasizing the difference in pixel intensity values along the horizontal direction (from left to right). It produces a large response for regions with high intensity changes horizontally.
+
+ - **Sobel Operator for Y Direction (`I_y`)**:
+ ```
+ [-1, -2, -1]
+ [0, 0, 0]
+ [1, 2, 1]
+ ```
+ - This kernel detects vertical edges by emphasizing the difference in pixel intensity values along the vertical direction (from top to bottom). It produces a large response for regions with high intensity changes vertically.
+
+- `I(x, y)` represents the intensity of the pixel at coordinates `(x, y)` in the image.
+
+- The convolution operation `*` between the Sobel kernel and the image `I(x, y)` calculates the gradient magnitude in the specified direction (either horizontal or vertical) at each pixel location, highlighting areas of rapid intensity change, which correspond to edges in the image.
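+
+A sketch of the Sobel computation with SciPy's `convolve` (the random array is a stand-in image):
+
+```python
+import numpy as np
+from scipy.ndimage import convolve
+
+Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
+Ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
+
+img = np.random.rand(128, 128)        # stand-in grayscale image
+Ix = convolve(img, Kx)                # horizontal intensity changes, I_x
+Iy = convolve(img, Ky)                # vertical intensity changes, I_y
+G = np.sqrt(Ix ** 2 + Iy ** 2)        # gradient magnitude G(x, y)
+```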
+
+
+
+#### 2.3 Shape-Based Techniques
+
+Shape-based techniques focus on the geometry of objects in the image, such as edges, contours, and regions.
+
+- **Hough Transform**: Used to detect shapes, especially lines and circles, by transforming points in the image into a parameter space.
+
+```
+ρ = x * cos(θ) + y * sin(θ)
+```
+
+### Explanation of Terms:
+
+- **\(\rho\)**: The **perpendicular distance** from the origin to the line in the image. It represents how far the line is from the origin in polar coordinates.
+
+- **\(x\)** and **\(y\)**: The **coordinates** of a point on the line in Cartesian (xy) coordinates.
+
+- **\(\theta\)**: The **angle** between the line and the positive x-axis. It defines the orientation of the line with respect to the x-axis.
+
+### In Context:
+
+This formula is used in the **Hough Transform** for line detection in images. It expresses the equation of a line in **polar coordinates** \((\rho, \theta)\) instead of the typical Cartesian form \(y = mx + b\). By transforming the problem into polar coordinates, it becomes easier to detect lines with different orientations and distances from the origin.
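+
+A compact sketch of the voting step: each edge pixel votes for every (ρ, θ) line passing through it, and peaks in the accumulator correspond to detected lines. The 1° angular resolution is an assumed choice:
+
+```python
+import numpy as np
+
+def hough_lines(edges, n_theta=180):
+    """Sketch: accumulate (rho, theta) votes for a binary edge map."""
+    rows, cols = edges.shape
+    diag = int(np.ceil(np.hypot(rows, cols)))      # largest possible |rho|
+    thetas = np.deg2rad(np.arange(n_theta))        # 0 .. 179 degrees
+    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
+    ys, xs = np.nonzero(edges)                     # edge-pixel coordinates
+    for x, y in zip(xs, ys):
+        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
+        acc[rho + diag, np.arange(n_theta)] += 1   # shift rho to a valid index
+    return acc
+```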
+
+- **Contour-Based Features**: Uses edge detection to identify object contours, which are then analyzed for features like area, perimeter, and moments.
+
+
+
+#### 2.4 Color-Based Techniques
+
+Color features are derived from the pixel values in color images. They are often used in conjunction with other features for tasks like object detection and segmentation.
+
+- **Color Histograms**: Measure the distribution of colors in an image and are often used for image retrieval tasks.
+
+```
+H(i) = (number of pixels with color i) / (total number of pixels)
+```
+
+### Explanation of Terms:
+
+- **\(H(i)\)**: The **histogram value** for color \(i\). This value represents the proportion of pixels in the image that have the specific color intensity or category \(i\).
+
+- **Number of pixels with color i**: The **count of pixels** in the image that have the specific color or intensity value \(i\).
+
+- **Total number of pixels**: The **total count of pixels** in the entire image. This serves as the denominator to normalize the histogram value, making it a proportion.
+
+### In Context:
+
+This formula is used to compute the **color histogram** of an image, which is a representation of the distribution of colors within that image. Histograms are essential in various image processing tasks, including image enhancement, segmentation, and object detection, as they provide insights into the color distribution and can help in analyzing the overall color composition of the image.
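+
+This histogram is one line of NumPy for an 8-bit single-channel image (the random array is a stand-in):
+
+```python
+import numpy as np
+
+img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in channel
+H = np.bincount(img.ravel(), minlength=256) / img.size      # H(i); sums to 1
+```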
+
+- **Color Moments**: Statistical moments that describe the distribution of color in an image using mean, variance, and skewness.
+
+
+
+#### 2.5 Keypoint-Based Techniques
+
+Keypoints are interest points in the image that are invariant to transformations such as scaling, rotation, and translation.
+
+- **Scale-Invariant Feature Transform (SIFT)**: Detects keypoints and computes descriptors based on the orientation and scale of image regions.
+
+- **Speeded-Up Robust Features (SURF)**: Similar to SIFT, but optimized for speed by using integral images and approximations.
+
+The core formulas behind these keypoint-based techniques are given below in plaintext:
+
+### 1. Scale-Invariant Feature Transform (SIFT)
+
+A simplified form of the keypoint descriptor (a Gaussian-weighted window around the keypoint):
+
+```
+D(x, y) = ∑(u=-k to k) ∑(v=-k to k) I(x + u, y + v) * G(u, v, σ)
+```
+
+Where:
+
+- `D(x, y)` is the descriptor at the keypoint located at (x, y).
+- `I(x, y)` is the intensity of the image at the pixel location (x, y).
+- `G(u, v, σ)` is a Gaussian kernel with standard deviation σ used to weight the pixel contributions.
+- `k` defines the size of the window for the descriptor.
+
+### 2. Speeded-Up Robust Features (SURF)
+
+Determinant-of-Hessian measure used for keypoint detection:
+
+```
+det(H(x, y)) = (L_xx * L_yy) - (L_xy)^2
+```
+
+Where:
+
+- `H(x, y)` is the Hessian matrix at the pixel location (x, y); its determinant `det(H(x, y))` is thresholded to detect blob-like keypoints.
+- `L_xx`, `L_yy`, and `L_xy` are the second-order (Gaussian) derivatives of the image at that location.
+
+These formulas represent the mathematical foundations of the SIFT and SURF techniques for feature extraction.
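+
+As a usage sketch: in OpenCV 4.x, SIFT ships in the main module as `cv2.SIFT_create()` (SURF, being patented, typically requires an opencv-contrib build and is omitted here). `input.png` is a placeholder path.
+
+```python
+import cv2
+
+image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
+
+sift = cv2.SIFT_create()
+keypoints, descriptors = sift.detectAndCompute(image, None)
+
+# Each descriptor is a 128-dimensional vector built from gradient-orientation
+# histograms in a window around the keypoint.
+print(len(keypoints))
+```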
+
+---
+
+
+
+### 3. Feature Extraction in Different Color Spaces (HSI)
+
+Feature extraction techniques can be applied in different color spaces, such as RGB, HSI (Hue, Saturation, Intensity), and YCbCr. In the **HSI color space**, features may be extracted from individual channels like hue, saturation, or intensity to capture specific characteristics of the image.
+
+- **Hue**: Used for color-based segmentation or detection.
+- **Saturation**: Helps differentiate between vivid and dull areas in the image.
+- **Intensity**: Similar to grayscale, intensity-based features like edges and textures can be extracted.
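+
+OpenCV exposes the closely related HSV space rather than HSI; as a hedged stand-in, the sketch below splits an image into hue, saturation, and value channels and thresholds a hue band (the range values are purely illustrative):
+
+```python
+import cv2
+
+image = cv2.imread("input.png")  # placeholder path, BGR order
+hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
+
+hue, saturation, value = cv2.split(hsv)
+
+# Hue can drive color-based segmentation, e.g. keeping a rough "green" band.
+mask = cv2.inRange(hsv, (35, 50, 50), (85, 255, 255))
+```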
+
+---
+
+
+
+### 4. Mathematical Formulations
+
+Most feature extraction methods rely on mathematical models to compute specific properties of the image. Here are a few common formulations:
+
+### 1. Texture Extraction (GLCM)
+
+**Formula**:
+
+```
+P(i, j | d, θ) = #(i, j occurrences separated by distance d at angle θ) / N
+```
+
+**Explanation**:
+
+- `P(i, j | d, θ)` represents the probability of occurrence of pixel pairs with intensity levels `i` and `j`.
+- `d` is the spatial distance between the pixels.
+- `θ` is the direction of the pixel pair (the angle).
+- `#(i, j occurrences)` is the count of pixel pairs with intensities `i` and `j` that are separated by distance `d` and at angle `θ`.
+- `N` is the total number of pixel pairs considered in the image.
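+
+scikit-image (linked in the references) implements this directly; a minimal sketch, assuming a recent version where the functions are spelled `graycomatrix`/`graycoprops`:
+
+```python
+import numpy as np
+from skimage.feature import graycomatrix, graycoprops
+
+# Tiny 4-level test image; real images are usually quantized to few levels first.
+image = np.array([[0, 0, 1, 1],
+                  [0, 0, 1, 1],
+                  [0, 2, 2, 2],
+                  [2, 2, 3, 3]], dtype=np.uint8)
+
+# P(i, j | d=1, theta=0), normalized so the entries sum to 1.
+glcm = graycomatrix(image, distances=[1], angles=[0], levels=4, normed=True)
+contrast = graycoprops(glcm, "contrast")
+```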
+
+---
+
+### 2. Edge Detection (Canny)
+
+**Formula**:
+
+```
+G(x, y) = √((I_x(x, y))² + (I_y(x, y))²)
+```
+
+**Explanation**:
+
+- `G(x, y)` is the gradient magnitude at pixel `(x, y)`.
+- `I_x(x, y)` is the gradient of the image in the x-direction at the pixel `(x, y)`.
+- `I_y(x, y)` is the gradient of the image in the y-direction at the pixel `(x, y)`.
+- The formula computes the overall gradient magnitude by combining the gradients in both directions, highlighting areas with significant intensity changes that indicate edges.
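+
+A minimal usage sketch with OpenCV, where Gaussian smoothing first suppresses noise and the two thresholds drive hysteresis edge linking (all parameter values are illustrative):
+
+```python
+import cv2
+
+image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
+
+# Smooth before differentiation to suppress noise.
+blurred = cv2.GaussianBlur(image, (5, 5), 1.4)
+
+# 50 and 150 are illustrative low/high hysteresis thresholds.
+edges = cv2.Canny(blurred, 50, 150)
+```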
+
+---
+
+### 3. Shape Detection (Hough Transform)
+
+**Formula**:
+
+```
+ρ = x cos θ + y sin θ
+```
+
+**Explanation**:
+
+- `ρ` is the distance from the origin (0,0) to the closest point on the line.
+- `x` and `y` are the coordinates of the points in the image.
+- `θ` is the angle that the perpendicular from the origin to the line makes with the x-axis.
+- This formula represents the relationship between the Cartesian coordinates of a point and the polar representation of a line. It is used in the Hough Transform to detect lines in an image by converting points to a parameter space defined by `ρ` and `θ`.
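+
+To make the parameter-space voting concrete, here is a NumPy-only teaching sketch of the accumulator (not an optimized implementation; `edges` is assumed to be a binary edge map such as the output of a Canny detector):
+
+```python
+import numpy as np
+
+def hough_accumulator(edges):
+    """Vote in (rho, theta) space for every nonzero pixel of a binary edge map."""
+    h, w = edges.shape
+    diag = int(np.ceil(np.hypot(h, w)))           # largest possible |rho|
+    thetas = np.deg2rad(np.arange(0, 180))        # theta sampled at 1-degree steps
+    accumulator = np.zeros((2 * diag + 1, len(thetas)), dtype=np.int64)
+
+    ys, xs = np.nonzero(edges)                    # coordinates of edge pixels
+    for x, y in zip(xs, ys):
+        rhos = x * np.cos(thetas) + y * np.sin(thetas)
+        # Shift rho by diag so negative distances index into the array.
+        accumulator[np.round(rhos).astype(int) + diag, np.arange(len(thetas))] += 1
+    return accumulator, thetas                    # peaks mark detected lines
+```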
+
+---
+
+
+
+### 5. Flowchart of the Feature Extraction Process
+
+```
+ Start
+ |
+ Load the image (I)
+ |
+ Choose a feature extraction method
+ |
+ Apply the chosen method (e.g., edges, textures)
+ |
+ Extract features (edges, corners, keypoints)
+ |
+ Output feature descriptors or maps
+ |
+ End
+```
+
+---
+
+
+
+### 6. Real-Life Applications of Feature Extraction
+
+1. **Facial Recognition**: Keypoints and texture features are used to recognize faces by extracting landmarks such as eyes, nose, and mouth.
+2. **Medical Imaging**: Edge detection helps in segmenting organs or detecting tumors in X-rays and MRIs.
+3. **Autonomous Vehicles**: Object detection relies on extracting features like edges, shapes, and textures to detect pedestrians, vehicles, and obstacles.
+4. **Content-Based Image Retrieval (CBIR)**: Color histograms and texture features are used to search and retrieve similar images from large databases.
+
+---
+
+
+
+### 7. Advantages and Disadvantages
+
+#### Advantages:
+
+- **Wide Applicability**: Feature extraction is useful in a wide range of tasks, from object detection to image classification.
+- **Invariance to Transformations**: Techniques like SIFT and SURF are robust to scaling, rotation, and lighting changes.
+
+#### Disadvantages:
+
+- **Computationally Intensive**: Some methods, like SIFT, can be slow and require a lot of computation.
+- **Sensitive to Noise**: Simple methods like edge detection may be affected by noise or low-quality images.
+
+---
+
+
+
+### 8. Conclusion
+
+Feature extraction plays a vital role in image processing, enabling the detection and classification of objects based on meaningful characteristics. By selecting the appropriate feature extraction technique, images can be effectively analyzed for a wide variety of tasks, from medical diagnostics to facial recognition and beyond.
+
+---
+
+
+
+### 9. References
+
+- [Feature Extraction Techniques in Image Processing](https://www.sciencedirect.com/science/article/pii/S0031320317304445)
+- [A Survey on Feature Extraction Methods](https://www.mdpi.com/2076-3417/9/18/3685)
+- [Image Processing in Python](https://scikit-image.org/)
diff --git a/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Solutions.pdf b/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Solutions.pdf
new file mode 100644
index 0000000..b13d4c0
Binary files /dev/null and b/7 SEMESTER/Image Processing/Assignment/Assignment 3/Assignment 3 Solutions.pdf differ
diff --git a/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/DIP Image Segmentation.pdf b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/DIP Image Segmentation.pdf
new file mode 100644
index 0000000..b7943e5
Binary files /dev/null and b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/DIP Image Segmentation.pdf differ
diff --git a/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation Detection of Discontinuities.pdf b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation Detection of Discontinuities.pdf
new file mode 100644
index 0000000..ef274e4
Binary files /dev/null and b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation Detection of Discontinuities.pdf differ
diff --git a/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation Presentation.pptx b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation Presentation.pptx
new file mode 100644
index 0000000..b806bba
Binary files /dev/null and b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation Presentation.pptx differ
diff --git a/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation.ppt b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation.ppt
new file mode 100644
index 0000000..b4ad5d2
Binary files /dev/null and b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Image Segmentation.ppt differ
diff --git a/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Stathaki Image Segmentation.ppt b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Stathaki Image Segmentation.ppt
new file mode 100644
index 0000000..5b8f063
Binary files /dev/null and b/7 SEMESTER/Image Processing/Extra Resources/Notes/Unit 3/Stathaki Image Segmentation.ppt differ
diff --git a/7 SEMESTER/Image Processing/Extra Resources/Research Papers/Graph-based_Fusion_Modeling_and_Explanation_for_Di.pdf b/7 SEMESTER/Image Processing/Extra Resources/Research Papers/Graph-based_Fusion_Modeling_and_Explanation_for_Di.pdf
new file mode 100644
index 0000000..440cc13
Binary files /dev/null and b/7 SEMESTER/Image Processing/Extra Resources/Research Papers/Graph-based_Fusion_Modeling_and_Explanation_for_Di.pdf differ