Learn Python
Learn Data Structure & Algorithm
Learn Numpy
Learn Pandas
Learn Matplotlib
Learn Seaborn
Learn Statistics
Learn Math
Learn MATLAB
Learn Machine learning
Learn Github
Learn OpenCV
Introduction
Setup
ANN
Working process of ANN
Propagation
Bias parameter
Activation function
Loss function
Overfitting and Underfitting
Optimization function
Chain rule
Minima
Gradient problem
Weight initialization
Dropout
ANN Regression Exercise
ANN Classification Exercise
Hyperparameter tuning
CNN
CNN basics
Convolution
Padding
Pooling
Data augmentation
Flattening
Create Custom Dataset
Binary Classification Exercise
Multiclass Classification Exercise
Transfer learning
Transfer model basic template
RNN
How RNN works
LSTM
Bidirectional RNN
Sequence to sequence
Attention model
Transformer model
Bag of words
Tokenization & Stop words
Stemming & Lemmatization
TF-IDF
N-Gram
Word embedding
Normalization
POS tagging
Parser
Semantic analysis
Regular expression
Learn MySQL
Learn MongoDB
Learn Web scraping
Learn Excel
Learn Power BI
Learn Tableau
Learn Docker
Learn Hadoop
Before understanding CNNs, we should first understand how the brain processes images. At the back of the brain sits a region called the cerebral cortex, and within it lies the visual cortex. The visual cortex is responsible for various image- and video-related processing. When we see an image, the signal passes through our sensory organ, the eye, then travels through various neurons until it reaches the cerebral cortex and finally the visual cortex. The visual cortex contains multiple layers, and every layer plays an important role: each one performs a different task in image detection. Suppose one layer is responsible for finding the edges in an image; that information is passed to the next layer, which does some different work, such as deciding whether the object is moving, and so on. In other words, each layer extracts some information from the image or the video frames. We try to do a similar thing in a CNN.
Images are usually described by their pixels, e.g. 360p, 720p, 1080p, a 4x4 pixel matrix, a 6x6 pixel matrix, etc. Suppose we have a 4x4 pixel matrix image. Each pixel value must lie in the range 0-255; it can be 3, 0, 245, 155, etc., but always between 0 and 255. For color images we use RGB, where R = red, G = green, and B = blue. So here we have three different layers, and in each layer the pixel values again range from 0 to 255.
So how do we create a color image?
We put color values (between 0 and 255) in each layer, then combine those three layers, and after that we get a color image.
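The pixel matrices above can be sketched with NumPy. The values here are hypothetical, chosen only to show the shapes: a grayscale image is a single 4x4 layer, and a color image stacks three such layers (R, G, B) along a channel axis.

```python
import numpy as np

# A hypothetical 4x4 grayscale image: each pixel is an intensity in [0, 255].
gray = np.array([
    [  0,  50, 100, 150],
    [ 25,  75, 125, 175],
    [ 50, 100, 150, 200],
    [ 75, 125, 175, 255],
], dtype=np.uint8)

# A color image adds a channel axis: three 4x4 layers, one each for R, G, B.
red   = np.full((4, 4), 255, dtype=np.uint8)  # red layer at maximum intensity
green = np.zeros((4, 4), dtype=np.uint8)      # green layer all zeros
blue  = np.zeros((4, 4), dtype=np.uint8)      # blue layer all zeros

# Combining the three layers gives one color image of shape (4, 4, 3);
# with these values every pixel would display as pure red.
color = np.stack([red, green, blue], axis=-1)

print(gray.shape)   # (4, 4)
print(color.shape)  # (4, 4, 3)
```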
In CNN, we apply a filter to the image. After applying the filter, we get a new filtered image, and this process is called convolution.
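A minimal sketch of this filtering step, assuming a 6x6 grayscale image and a 3x3 vertical-edge-detection filter (both made up for illustration), with no padding and stride 1:

```python
import numpy as np

# Hypothetical 6x6 image: bright on the left, dark on the right,
# so there is a vertical edge in the middle.
image = np.array([
    [10, 10, 10, 0, 0, 0],
    [10, 10, 10, 0, 0, 0],
    [10, 10, 10, 0, 0, 0],
    [10, 10, 10, 0, 0, 0],
    [10, 10, 10, 0, 0, 0],
    [10, 10, 10, 0, 0, 0],
], dtype=float)

# A classic vertical-edge-detection filter (kernel).
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

def convolve2d(img, k):
    """Slide the filter over the image; each output cell is the sum of
    the element-wise product of the filter and the patch under it."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

filtered = convolve2d(image, kernel)
print(filtered.shape)  # (4, 4): a 6x6 image with a 3x3 filter shrinks to 4x4
```

Note how the output is largest where the filter sits on the bright-to-dark boundary, which is exactly what "detecting an edge" means here.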
In convolution, we can use many different filters, e.g. for edge detection, motion detection, counting how many faces are in the image, etc. After convolution we get our outputs. We then apply an activation function to each cell value of the filtered (output) image and try to update it. In ANN we update the weights, but in CNN we update the values of the filters. For the updates we use optimizers.