IvLabs

Image Denoising

Overview

  • Deep learning (DL) is an area of machine learning that deals with artificial neural networks, which are algorithms inspired by the structure and function of the brain.
  • Autoencoder (AE): a type of artificial neural network with a bottleneck architecture that turns a high-dimensional input into a low-dimensional latent code (the encoder) and then reconstructs the input from this latent code (the decoder).
  • Denoising Autoencoder (DAE): a modification of the autoencoder. A denoising autoencoder corrupts the input data by adding noise to the input image and then tries to reconstruct the original image from the noisy one (a minimal sketch follows this list).
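To make the encoder/decoder and noise-corruption ideas above concrete, here is a minimal sketch of a denoising autoencoder in PyTorch. The fully connected layer sizes (784, 128, 32) are illustrative assumptions and not taken from this project; the convolutional version actually used is described in the Approach section below.

```python
import torch
import torch.nn as nn

# Minimal illustrative sketch (not the project's code): a fully connected
# denoising autoencoder showing the encoder -> latent code -> decoder idea.
class DenoisingAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the high-dimensional input into a small latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim), nn.ReLU(),
        )
        # Decoder: reconstruct the clean input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Corrupt the input with additive Gaussian noise, then ask the network to
# recover the clean image from the noisy one.
clean = torch.rand(16, 784)                                 # stand-in batch of flattened images
noisy = (clean + 0.5 * torch.randn_like(clean)).clamp(0.0, 1.0)

model = DenoisingAutoencoder()
reconstruction = model(noisy)
loss = nn.functional.mse_loss(reconstruction, clean)        # compare against the clean target
```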

Abstract:

  • With the number of digital images taken every day increasing, the demand for more accurate and visually pleasing images has also grown.
  • Image denoising addresses this need: as the name suggests, noise is removed from a noisy image to restore the original image.
  • At IvLabs, we took on this task of image denoising with the help of a convolutional denoising autoencoder.
Approach:
  • The denoising autoencoder is implemented in PyTorch and applied to the MNIST and Fashion MNIST datasets.
  • The encoder network consists of three convolutional layers, while the decoder network has three transposed convolutional layers.
  • The encoder downsamples the data, and the decoder then reconstructs the original data from the lower-dimensional representation.
  • Gaussian noise with a noise factor of 0.5 is added to distort the images in the datasets.
  • Each layer uses a ReLU activation function, and the final layer of the decoder uses a sigmoid activation function.
  • MSE loss and the Adam optimizer were used to update the parameters (a sketch of this architecture follows the list).
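The list above specifies the overall shape of the network but not the exact layer widths. The sketch below is one plausible PyTorch realisation: three convolutional layers with ReLU in the encoder and three transposed convolutional layers in the decoder, ending in a sigmoid. The channel counts, kernel sizes, and strides are assumptions chosen to fit 28x28 MNIST images, not the project's exact settings.

```python
import torch
import torch.nn as nn

class ConvDenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: three convolutional layers downsample 28x28 inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 28 -> 14
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 14 -> 7
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=7),                       # 7 -> 1
            nn.ReLU(),
        )
        # Decoder: three transposed convolutional layers upsample back to 28x28.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=7),               # 1 -> 7
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),         # 7 -> 14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1),         # 14 -> 28
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvDenoisingAutoencoder()
noisy = torch.rand(8, 1, 28, 28)       # batch of noisy MNIST-sized images
print(model(noisy).shape)              # torch.Size([8, 1, 28, 28])
```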

Hyperparameters:

The same hyperparameters were used for both datasets.

Parameter         Value
Learning Rate     0.001
Weight Decay      0.00001
Batch Size        64
Epochs            10
Optimizer         Adam
Loss              MSE Loss

Results:

The results showed a loss of 0.0117 for the MNIST dataset and 0.0116 for the Fashion MNIST dataset after 10 epochs.
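A training loop consistent with these hyperparameters might look like the sketch below. Loading the data through torchvision, clamping the noisy images to [0, 1], and reusing the ConvDenoisingAutoencoder class from the earlier sketch are assumptions for illustration, not confirmed details of the project.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# MNIST via torchvision (assumed data source); labels are unused for denoising.
train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = ConvDenoisingAutoencoder()   # model class from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
criterion = nn.MSELoss()
noise_factor = 0.5

for epoch in range(10):
    running_loss = 0.0
    for images, _ in loader:
        # Corrupt the clean images with Gaussian noise before feeding the model.
        noisy = (images + noise_factor * torch.randn_like(images)).clamp(0, 1)
        optimizer.zero_grad()
        loss = criterion(model(noisy), images)   # reconstruct the clean images
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
    print(f"epoch {epoch + 1}: loss = {running_loss / len(train_set):.4f}")
```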

Concepts used:

  • Convolution in Image
  • Denoising Autoencoder

GitHub Repository

Tools and Libraries used:

  • Python
  • PyTorch
Team members:
  • Anand
  • Khushi
  • Nikhil
  • Prajyot
  • Pushkar
  • Syed
Team Mentors:
  • Sibam
  • Pulkit
  • Vignesh
  • Atharva
  • Kalyani
  • Sushant