Deep learning dropout

Why do we use dropout?

If we have a big neural network, we will have a lot of weight and bias parameters, and with a huge number of parameters there is a chance that our model will overfit. One thing to keep in mind about dropout: a single-layer neural network can face an underfitting problem, but a neural network with many hidden layers rarely underfits; it almost always faces an overfitting problem instead. The reason is that every extra layer adds many more weights and biases, and these weights and biases try to fit the training data perfectly. In that case we get a high-variance problem, and high variance means overfitting. That is why deep, multi-layer networks usually suffer from overfitting, and to fix overfitting we use dropout.

How does dropout work?

Dropout is a regularization technique. What we do is choose some neurons randomly and disable (ignore) them during the training phase; these units are not considered during a particular forward or backward pass. To implement dropout we select a dropout ratio, a value between 0 and 1, i.e. 0 < dropout < 1.
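To see what this looks like in practice, here is a minimal sketch of a small Keras network with dropout layers (the layer sizes and input shape are made up for illustration). Note that in Keras the rate argument of Dropout is the fraction of units to drop, not to keep:

```python
import tensorflow as tf

# A small fully connected network with dropout after each hidden layer.
# Dropout(rate) disables the given fraction of units at training time
# and is automatically a no-op at inference time.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),            # 20 input features (illustrative)
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),           # drop 50% of these 128 units per step
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.3),           # a smaller ratio for the later layer
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()
```

During model.fit the dropout layers are active; during model.predict they are switched off automatically.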

In the first iteration, some input nodes and some neurons in each hidden layer are disabled according to the ratio we chose. This means they have no impact on the network during forward and backward propagation. In the second iteration, a new random set of input nodes and neurons is selected, and those are again disabled during forward and backward propagation, and so on.

So we can say that in dropout, at each iteration we randomly select some input nodes and some neurons from each hidden layer, according to our ratio, and disable them to avoid the overfitting problem.
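To make the per-iteration masking concrete, here is a small NumPy sketch (the names keep_ratio and activations are illustrative, not from any library). A fresh random mask is drawn in each iteration, so a different set of neurons is disabled each time:

```python
import numpy as np

rng = np.random.default_rng(42)
keep_ratio = 0.6                      # probability that a neuron stays active
activations = rng.normal(size=8)      # pretend output of one hidden layer

for iteration in range(2):
    # A new random mask every iteration: 1 keeps a neuron,
    # 0 disables it for this forward/backward pass.
    mask = rng.random(8) < keep_ratio
    dropped = activations * mask
    print(f"iteration {iteration}: mask = {mask.astype(int)}")
    print(f"  surviving activations: {dropped}")
```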

Now a question can arise: what happens with the testing data?
The answer is that on testing data we don't disable any input nodes or neurons; all the neurons and input nodes stay connected. To account for dropout on test data, we multiply each weight by the ratio we have taken. Suppose the ratio we took is 0.6; then we multiply each weight by 0.6, like W1*0.6, W2*0.6 and so on. (Here the ratio is the probability of keeping a unit; if the ratio instead means the fraction of units dropped, the weights are multiplied by 1 minus that ratio.)
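Here is a tiny worked example of that test-time scaling in NumPy (the weight values are made up for illustration):

```python
import numpy as np

keep_ratio = 0.6                        # same ratio used during training
weights = np.array([0.8, -1.2, 0.5])    # hypothetical trained weights W1, W2, W3

# At test time no neuron is disabled; instead every weight is scaled by
# the keep ratio so the expected activation matches training:
# W1*0.6, W2*0.6, W3*0.6.
test_weights = weights * keep_ratio
print(test_weights)                     # [ 0.48 -0.72  0.3 ]
```

In practice, most modern frameworks implement "inverted dropout" instead: the surviving activations are scaled up by 1/keep_ratio during training, so no rescaling of the weights is needed at test time.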

Another question can arise: how do we select the p-value?
If our neural network has an overfitting problem, we should select a p-value greater than 0.5, but to get the exact value we should use hyperparameter tuning.
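As one simple form of such tuning, here is a hedged sketch of a grid search over candidate dropout ratios using Keras. The synthetic data, layer sizes, and candidate ratios are placeholders for your own problem:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data; replace with your real training/validation sets.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)
X_val, y_val = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)

def build_model(rate):
    # Same architecture each time; only the dropout ratio changes.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dropout(rate),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])

best_rate, best_loss = None, float('inf')
for rate in [0.2, 0.3, 0.5, 0.6]:       # candidate dropout ratios to try
    model = build_model(rate)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    history = model.fit(X_train, y_train,
                        validation_data=(X_val, y_val),
                        epochs=5, verbose=0)
    val_loss = min(history.history['val_loss'])
    if val_loss < best_loss:
        best_rate, best_loss = rate, val_loss

print('best dropout ratio:', best_rate)
```

The ratio with the lowest validation loss is kept; with real data you would usually train for more epochs and may also vary which layers get dropout.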
