Data Normalization and Feature Reduction

Problem Statement

  • How data normalization and feature reduction affect a deep neural network model.
  • Training time and accuracy score are the parameters on which we will compare the results.

How and What to Compare

  • We will take the MNIST dataset and classify the digits using the TensorFlow DNNClassifier. [Complete code]
  • We will use the following hyperparameters:
    • Hidden layer array
    • Batch size
    • Number of epochs

Preparation

  • Import Dependencies
  • Load Data.

import tensorflow as tf
import numpy as np

from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
import time


(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28)
X_test = X_test.astype(np.float32).reshape(-1, 28*28)
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
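
The preprocessing itself is part of the complete code linked above, so here is a rough sketch of what those two steps could look like: scaling the pixels to [0, 1] for normalization, and scikit-learn PCA for feature reduction (the 100-component count is an assumed value for illustration, not necessarily what was used in the runs below).

from sklearn.decomposition import PCA

# Normalization: scale pixel values from [0, 255] down to [0, 1]
X_train_norm = X_train / 255.0
X_test_norm = X_test / 255.0

# Feature reduction: project the 784 pixels onto fewer components with PCA
# (100 components is an assumed value for illustration)
pca = PCA(n_components=100)
X_train_reduced = pca.fit_transform(X_train_norm).astype(np.float32)
X_test_reduced = pca.transform(X_test_norm).astype(np.float32)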

Initialize Hyperparameters


hidden_units = [300, 100]
batch_size = 100
# Note: we will not use the same number of steps in all cases, as in some cases the model converges very fast
steps = 5000
model_dir = "/Users/amitjain/personalProjects/Machine-Learning/Handon-ML/tf_logs/mnsit"

Define Model And Prediction Function


def model(name, features, target, classes, batch_size, epochs, hidden_layers):
    t0 = time.time()

    feature_cols = tf.contrib.learn.infer_real_valued_columns_from_input(features)
    dnn_clf = tf.contrib.learn.DNNClassifier(hidden_units=hidden_layers, n_classes=classes,
                                             feature_columns=feature_cols,
                                             model_dir=model_dir + name)
    dnn_clf.fit(features, target, batch_size=batch_size, steps=epochs)
    t1 = time.time()
    return dnn_clf, t1 - t0


def prediction(model, test_feature, test_target):
    y_pred = model.predict(test_feature)
    predictions = list(y_pred)
    print('Accuracy Score', accuracy_score(test_target, predictions))
    print('Classification Report', classification_report(test_target, predictions))
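
The four runs below call these two helpers with the differently preprocessed inputs. The exact calls live in the complete code linked above; a sketch of how they might look, using the assumed X_train_norm / X_train_reduced variables from the preprocessing sketch earlier:

# Without normalization and without feature reduction
clf, train_time = model("raw", X_train, y_train, 10, batch_size, steps, hidden_units)
print("Training time:", train_time)
prediction(clf, X_test, y_test)

# With normalization only
clf, train_time = model("norm", X_train_norm, y_train, 10, batch_size, steps, hidden_units)
prediction(clf, X_test_norm, y_test)

# With feature reduction (swap in the raw or normalized reduced features as needed)
clf, train_time = model("pca", X_train_reduced, y_train, 10, batch_size, steps, hidden_units)
prediction(clf, X_test_reduced, y_test)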


Run Without Normalization and Feature Reduction

[Screenshot: training time and accuracy output]

Run Without Normalization but with Feature Reduction

[Screenshot: training time and accuracy output]

Run With Normalization But Without Feature Reduction

[Screenshot: training time and accuracy output]

Run With Normalization But With Feature Reduction

[Screenshot: training time and accuracy output]

Conclusion

  • Normalization helps the model converge faster and also converge better.
  • Feature reduction mostly helps the model converge faster, with little impact on model accuracy.

Different dropout functions in TensorFlow

Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. It is a very efficient way of performing model averaging with neural networks.


Why Dropout: Dropout helps prevent weights from converging to identical positions. It does this by randomly turning nodes off during forward propagation.

In simple terms, some of the neurons will not participate in the calculation.
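
As a rough illustration of the idea (plain numpy, not the TensorFlow implementation), "inverted" dropout keeps each unit with probability keep_prob and scales the survivors so the expected activation stays the same:

import numpy as np

def dropout(activations, keep_prob=0.6):
    # Keep each unit with probability keep_prob, drop the rest
    mask = np.random.rand(*activations.shape) < keep_prob
    # Scale the kept units so the expected sum of activations is unchanged
    return activations * mask / keep_prob

layer_output = np.array([0.5, 1.2, 0.3, 0.9, 2.0])
print(dropout(layer_output))  # roughly 40% of the units come out as zero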

In TensorFlow we have two dropout functions.

  • tf.nn.dropout
    • It has a parameter keep_prob, which states the probability that a neuron will be kept (not dropped). If we give keep_prob a value of 0.6, it means 60% of the neurons will remain and 40% will be dropped.
  • tf.layers.dropout
    • It has two main parameters, rate and training. rate is the fraction of neurons that will be dropped. If the rate is 0.6, then 60% of the neurons will be dropped and 40% will be used.
    • We can see that keep_prob = 1 - rate.
    • The training parameter differentiates whether the network is running for training or for inference. We need dropout only while we are training the neural network, not while we are testing (inferencing) it.

So what is the difference between these two functions?

tf.layers.dropout is a wrapper over tf.nn.dropout, which gives us the demarcation of whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
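
A small sketch of both calls in TF 1.x style (the placeholder shape and the probability values are just for illustration); note the complementary parameters and the training flag:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 100])
is_training = tf.placeholder(tf.bool)

# tf.nn.dropout: keep_prob is the probability of KEEPING a unit
drop_nn = tf.nn.dropout(x, keep_prob=0.6)  # about 40% of units dropped

# tf.layers.dropout: rate is the probability of DROPPING a unit,
# and training switches dropout off at inference time
drop_layers = tf.layers.dropout(x, rate=0.4, training=is_training)  # keep_prob = 1 - rate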