Machine learning exercise 1

Dataset Link

Install keras

#!pip install tensorflow
#!pip install keras

Importing Libraries

#Basic libraries
import numpy as np
import pandas as pd

#Preprocessing and evaluation utilities from scikit-learn
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.metrics import confusion_matrix

#Deep learning libraries
import tensorflow
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout

Getting the data

df = pd.read_csv("D:/diabetes.csv")
df.head()
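
Before preprocessing, it is worth checking the size of the data and the balance of the target classes (assuming the standard diabetes dataset, whose target column is Outcome, as used in the rest of this exercise):

#Number of rows and columns
print(df.shape)

#How many samples belong to each class of the target
print(df["Outcome"].value_counts())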

Let's build a small DataFrame showing the percentage of missing values in each column, sorted in descending order

missing_pct=[]
feature_names=[]
for col in df:
    #percentage of missing values in this column
    pct=df[col].isnull().sum()/df.shape[0]*100
    missing_pct.append(pct)
    feature_names.append(col)
q={"Feature Name":feature_names,"Percentage of missing values":missing_pct}
missingPercentageDataset=pd.DataFrame(q).sort_values(by="Percentage of missing values",ascending=False)
pd.set_option("display.max_rows",None)
missingPercentageDataset
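
As a side note, pandas can produce the same percentages in a single line; a sketch equivalent to the loop above:

#isnull().mean() gives the fraction of missing values per column
(df.isnull().mean()*100).sort_values(ascending=False)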

Separating the independent variables from the target

#Drop the target column so x contains only the independent variables
x=df.drop(columns="Outcome")

Data standardization

Standardization is performed only on the independent variables, never on the target.

#Fit the scaler on the independent variables and transform them to
#zero mean and unit variance
scaler= StandardScaler()
transform_scale_data=scaler.fit_transform(x)
transform_scale_data
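
As a quick sanity check, after standardization every column should have a mean of approximately 0 and a standard deviation of approximately 1:

#Each column should now have mean ~0 and standard deviation ~1
print(transform_scale_data.mean(axis=0).round(2))
print(transform_scale_data.std(axis=0).round(2))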

Let's define the final X (all the independent variables) and Y (the dependent variable)

A fair question here: we already separated the independent variables into x, so why are we assigning them again?

The answer is simple. In a classification problem, standardization is performed only on the independent variables, not on the dependent variable. So we first dropped the target column to get x, then standardized x. Those transformed independent variables are what the model will be trained on, so we assign them to X, and we take the untouched Outcome column as Y.

X=transform_scale_data
Y=df["Outcome"]

Splitting data into train and test data

#stratify=Y keeps the class proportions the same in the train and test sets
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=0.2,stratify=Y, random_state=51)
print(X.shape, X_train.shape, X_test.shape)
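
To see what stratify=Y does, compare the class proportions of the full target with those of the split sets; they should match closely:

#Class proportions should be nearly identical across all three
print(Y.value_counts(normalize=True))
print(Y_train.value_counts(normalize=True))
print(Y_test.value_counts(normalize=True))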

Creating ANN model

Initializing the model

'''
Step 1: Sequential is used to initialize our ANN model. In this step we create an ann object from the Keras class named Sequential.
'''
ann=Sequential()

Adding the input layer and first hidden layer

'''
Step 2:
Dense is used to add the different layers of the ANN. In this code example we will create a neural network with one input layer, three hidden layers and one output layer.

Let's create the input layer and the first hidden layer.
'''
ann.add(Dense(units=6, input_dim=X_train.shape[1], kernel_initializer='he_normal', activation="relu"))
ann.add(Dropout(rate = 0.1))
'''
Parameters:
1. units= the number of neurons in the hidden layer. A common rule of thumb is to start with roughly half the size of the input layer, but the best value should be chosen with hyperparameter tuning; here we use 6.
2. input_dim= the number of neurons in the input layer, which must equal the number of features in the dataset (the columns of X_train). Passing X_train.shape[1] keeps it in sync with the training data automatically.
3. kernel_initializer= the name of the weight initialization technique we want to use.
4. activation= the name of the activation function we want to use.
Dropout(rate = 0.1) randomly disables 10% of the previous layer's neurons during training to reduce overfitting.
'''

Adding the Second Hidden layer

ann.add(Dense(units=6, kernel_initializer='he_normal', activation="relu"))
ann.add(Dropout(rate = 0.1))
'''
If you want to add more layers, do the same thing again. You can change the parameter values if you like, for example the number of neurons.
'''

Adding the third Hidden layer

ann.add(Dense(units=6, kernel_initializer='he_normal', activation="relu"))
ann.add(Dropout(rate = 0.1))

Adding the output layer

ann.add(Dense(units=1,kernel_initializer = 'glorot_normal', activation="sigmoid"))
'''
Here units=1 means we want only one neuron in the output layer. In ANN classification, if the target variable contains two unique values/classes (a binary classification problem like yes or no), use 1 output neuron with a sigmoid activation. If there are more than 2 classes, use one neuron per class: for example, a target with 6 categories needs 6 output neurons.
'''
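
For illustration, here is a minimal sketch (kept commented out, since our problem is binary) of what the output layer would look like for a hypothetical 6-class problem:

#Hypothetical multiclass output layer: one neuron per class, with softmax
#turning the outputs into class probabilities
#ann.add(Dense(units=6, kernel_initializer='glorot_normal', activation="softmax"))
#The loss in compile() would then be categorical_crossentropy (or
#sparse_categorical_crossentropy for integer labels) instead of binary_crossentropy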

Compiling ANN

ann.compile(optimizer="adam",loss="binary_crossentropy",metrics=['accuracy'])
'''
Parameters:
1. optimizer= pass the name of the optimizer you want to use
2. loss= pass the name of the loss function you want to use
3. metrics= pass the names of the metrics you want to use
'''
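
With the model built and compiled, it can be useful to inspect the architecture; summary() prints every layer with its output shape and trainable parameter count:

#Print each layer, its output shape and its parameter count
ann.summary()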

Fitting the model

#validation_split=0.33 holds out 33% of the training data to monitor
#validation loss and accuracy at the end of every epoch
history = ann.fit(X_train,Y_train,validation_split=0.33,batch_size=12,epochs = 100)
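
fit returns a History object whose history dictionary stores the loss and accuracy per epoch. Plotting the training and validation loss is a quick way to spot overfitting; a small sketch using matplotlib:

import matplotlib.pyplot as plt

#Diverging curves (validation loss rising while training loss falls)
#are a sign of overfitting
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()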

Loss and accuracy of the train data

#evaluate returns the loss followed by the metrics passed to compile
score, acc = ann.evaluate(X_train, Y_train, batch_size=10)
print('Train loss:', score)
print('Train accuracy:', acc)

Loss and accuracy of the test data

score, acc = ann.evaluate(X_test, Y_test, batch_size=10)
print('Test loss:', score)
print('Test accuracy:', acc)

Confusion Matrix

#The model outputs probabilities; convert them to class labels with a
#0.5 threshold before building the confusion matrix
y_pred = ann.predict(X_test)
y_pred = (y_pred > 0.5)
cm = confusion_matrix(Y_test, y_pred)
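
For a binary problem the confusion matrix is 2x2, and its four cells can be unpacked directly; a small sketch:

#Unpack the 2x2 matrix: true negatives, false positives,
#false negatives, true positives
tn, fp, fn, tp = cm.ravel()
print("TN:", tn, "FP:", fp, "FN:", fn, "TP:", tp)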


import seaborn as sns
import matplotlib.pyplot as plt
p = sns.heatmap(pd.DataFrame(cm), annot=True, cmap="YlGnBu" ,fmt='g')
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')

Classification report

from sklearn.metrics import classification_report
print(classification_report(Y_test,y_pred))

Prediction system

#Example input: the first row of the standard diabetes dataset, whose
#actual Outcome is 1 (values assume the usual 8-feature Pima diabetes CSV)
input_data=(6,148,72,35,0,33.6,0.627,50)

#changing the input_data to a numpy array
input_data_as_numpy_array = np.asarray(input_data)

#reshape the np array as we are predicting for one instance
input_data_reshaped=input_data_as_numpy_array.reshape(1,-1)

#standardize the input with the scaler that was fitted on the training
#data, because the model was trained on standardized values
input_data_scaled=scaler.transform(input_data_reshaped)

#the sigmoid output is a probability, so threshold it at 0.5
prediction=ann.predict(input_data_scaled)

if (prediction[0][0]<=0.5):
    print("The person is not diabetic")
else:
    print("The person is diabetic")
