Monday, May 25, 2020

Autoencoders using Keras

AUTOENCODERS


          In this session, I am going to discuss:
  1. Who introduced autoencoders
  2. Why we need autoencoders
  3. What an autoencoder is
  4. How autoencoders are implemented
  5. Where we use autoencoders (applications)
  6. Drawbacks of autoencoders
            Autoencoders were introduced by Hinton and the PDP group to address the problem of "backpropagation without a teacher". Transferring high-definition images over the internet is a slow process.




           To work around this, a technique is used that compresses the image at the source, transfers it over the internet, and reconstructs it at the destination.



     There are also traditional methods, such as the PCA (Principal Component Analysis) algorithm, that can compress and reconstruct an image, but PCA reconstructions suffer from high noise.



      Autoencoders were introduced to reduce this noise. An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input.

Properties of an Autoencoder:
  1. Unsupervised: An autoencoder is an unsupervised learning algorithm, which means it is trained on data without output labels.
  2. Data-specific: Autoencoders can only compress data similar to what they have been trained on.
  3. Lossy: Autoencoders are lossy because the reconstructed output is not exactly the same as the input image.
Architecture of Autoencoders: 
        

          The simplest form of an autoencoder is a feedforward, non-recurrent neural network similar to single layer perceptrons that participate in multilayer perceptrons (MLP) – having an input layer, an output layer and one or more hidden layers connecting them – where the output layer has the same number of nodes (neurons) as the input layer, and with the purpose of reconstructing its inputs (minimizing the difference between the input and the output) instead of predicting the target value Y given inputs X. Therefore, autoencoders are unsupervised learning models (do not require labeled inputs to enable learning).
           
Generally, an autoencoder architecture has three components:

  1. An encoder that maps the input into the code.
  2. A code that acts as a bottleneck.
  3. A decoder that maps the code to a reconstruction of the original data.
The encoder maps the input features X to the latent-space representation (code) h:
                X  -->  h

The decoder maps the coded representation h back to the reconstruction X’:
                h  -->  X’
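
To make the mapping concrete, here is a minimal NumPy sketch of the two mappings and the reconstruction error that training would minimize (an illustrative toy example with random linear weights, not the Keras model from the implementation section below):

import numpy as np

# toy data: 100 samples with 784 features (e.g. flattened 28x28 images)
X = np.random.rand(100, 784).astype('float32')

# randomly initialised weights standing in for a learned linear encoder/decoder
W_enc = np.random.randn(784, 32) * 0.01   # encoder: 784 -> 32
W_dec = np.random.randn(32, 784) * 0.01   # decoder: 32 -> 784

h = X @ W_enc        # X  --> h   (code / latent representation)
X_rec = h @ W_dec    # h  --> X'  (reconstruction)

# mean squared reconstruction error; training adjusts W_enc and W_dec to minimise it
print(np.mean((X - X_rec) ** 2))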



Types of Autoencoders 


1. Regularized Autoencoders

-Sparse Autoencoders
                A sparse autoencoder may include more (rather than fewer) hidden units than inputs, but only a small number of the hidden units are allowed to be active at once. This sparsity constraint forces the model to respond to the unique statistical features of the data it was trained on.
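
In Keras, one way to impose such a constraint is an L1 activity regularizer on the encoding layer. The sketch below is illustrative; the penalty value 10e-5 and the layer sizes are assumptions, not tuned settings:

from keras.layers import Input, Dense
from keras.models import Model
from keras import regularizers

input_img = Input(shape=(784,))
# the L1 activity regularizer pushes most encoded activations towards zero
encoded = Dense(32, activation='relu',
                activity_regularizer=regularizers.l1(10e-5))(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)

sparse_autoencoder = Model(input_img, decoded)
sparse_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')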

-Denoising  Autoencoders
                 A denoising autoencoder takes a noisy image as input and is still able to reconstruct the clean image.
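
A common way to train this (a sketch that assumes the autoencoder model and the flattened, normalized x_train / x_test arrays from the implementation section below) is to corrupt the inputs with Gaussian noise and use the clean images as targets:

import numpy as np

noise_factor = 0.5  # illustrative noise level

# corrupt the clean images with Gaussian noise
x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(size=x_test.shape)

# keep pixel values inside the valid [0, 1] range
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)

# train: noisy images in, clean images as targets
autoencoder.fit(x_train_noisy, x_train,
                epochs=50, batch_size=256, shuffle=True,
                validation_data=(x_test_noisy, x_test))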
               
-Contractive Autoencoders
                 A contractive autoencoder adds a penalty that discourages the encoding from changing much when the input changes slightly, which makes the learned representation more robust.

2. Variational Autoencoders

                 Like other autoencoders, variational autoencoders consist of an encoder and a decoder, but here the decoder acts as a generative model: it reconstructs the data from the latent code space. In a VAE the latent space is continuous, because the encoder outputs the parameters of a probability distribution (a mean and a variance) rather than a single fixed code.
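
A minimal Keras sketch of the key mechanism, the reparameterization trick and the KL-divergence term that keeps the latent space continuous, is given below. The layer sizes, latent_dim = 2, and the optimizer are illustrative assumptions, not values from this post:

from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras.losses import binary_crossentropy

original_dim = 784   # flattened 28x28 images, as in the example below
latent_dim = 2       # illustrative choice

# encoder: maps x to the mean and log-variance of a Gaussian in latent space
x = Input(shape=(original_dim,))
h = Dense(256, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

# reparameterization trick: z = mean + sigma * epsilon, with epsilon ~ N(0, 1)
def sampling(args):
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    epsilon = K.random_normal(shape=(batch, latent_dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

z = Lambda(sampling)([z_mean, z_log_var])

# decoder: maps a sampled latent point back to a reconstruction (the generator)
h_decoded = Dense(256, activation='relu')(z)
x_decoded = Dense(original_dim, activation='sigmoid')(h_decoded)

vae = Model(x, x_decoded)

# total loss = reconstruction term + KL divergence term
reconstruction_loss = original_dim * binary_crossentropy(x, x_decoded)
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae.add_loss(K.mean(reconstruction_loss + kl_loss))
vae.compile(optimizer='adam')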

Applications of Autoencoders:

-Dimensionality Reduction
-Information Retrieval
-Anomaly Detection
-Image Processing
-Drug discovery
-Population synthesis
-Machine Translation
-Image Coloring
-Denoising Image
-Feature Variation
-Watermark Removal

Limitations of Autoencoders: 

         We have seen how autoencoders can be used to compress and reconstruct images. In practice, however, they are not very efficient at image compression, and they only work well on images similar to those they were trained on.

Implementation of an Undercomplete Autoencoder:


from keras.layers import Input, Dense  
from keras.models import Model

# this is the size of our encoded representations
encoding_dim = 32  # gives a compression factor of 784/32 = 24.5

# this is our input placeholder
input_img = Input(shape=(784,))  # each 28x28 image is flattened to 28*28 = 784 pixel values
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(input_img)  # hidden (encoding) Dense layer, connected to input_img

# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)  # output Dense layer, connected to 'encoded'

# this model maps an input to its reconstruction

autoencoder = Model(input_img, decoded) #Building a model for Autoencoder

# this model maps an input to its encoded representation
encoder = Model(input_img, encoded) #Building a model for encoder

# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))

# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]


# create the decoder model
decoder = Model(encoded_input, decoder_layer(encoded_input))  # build the decoder model

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')  # compile the autoencoder

#GET DATA
from keras.datasets import mnist

import numpy as np

(x_train, _), (x_test, _) = mnist.load_data()  # ignoring the output labels
x_train = x_train.astype('float32') / 255.  # scale pixel values to the range [0, 1]

x_test = x_test.astype('float32') / 255.

# reshape from (no. of images, 28, 28) to (no. of images, 784)
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) 

x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

print (x_train.shape)

print (x_test.shape)

#train with x_train

autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))

# encode and decode 

encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
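
The post does not include the plotting code; a minimal matplotlib sketch for the two-row comparison described below (the number of displayed digits, n = 10, is an illustrative choice) could look like this:

import matplotlib.pyplot as plt

n = 10  # number of test digits to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # first row: original test images
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28), cmap='gray')
    ax.axis('off')

    # second row: reconstructions produced by the decoder
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28), cmap='gray')
    ax.axis('off')
plt.show()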



Result after plotting the original images alongside the decoded images with matplotlib (first row: actual images, second row: reconstructed images):





