Fisseha Berhane, PhD

Data Scientist


Keras tutorial - the Happy House

Welcome to the first assignment of week 2. In this assignment, you will:

  1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
  2. See how you can build a deep learning algorithm in a couple of hours.

Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than raw Python/numpy code, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so some very complex models that you can implement in TensorFlow are difficult or impossible to express in Keras. That being said, Keras works fine for many common models.

In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!

In [2]:
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *

import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow

%matplotlib inline

Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...).

1 - The Happy House

For your next vacation, you decided to spend a week in a house with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to being happy while they are in the house. So anyone wanting to enter the house must prove their current state of happiness.

Figure 1: The Happy House

As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front-door camera to check whether the person is happy. The door should open only if the person is happy.

You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled.

Run the following code to normalize the dataset and learn about its shapes.

In [3]:
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)

Details of the "Happy" dataset:

  • Images are of shape (64,64,3)
  • Training: 600 pictures
  • Test: 150 pictures
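
As a quick sanity check, you can display one of the (already normalized) training images together with its label. A minimal sketch using the arrays produced above (the index is arbitrary):

index = 10  # any index in [0, 600)
plt.imshow(X_train[index])
plt.title("Label: " + str(Y_train[index, 0]) + "  (1 = happy, 0 = unhappy)")
plt.show()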

It is now time to solve the "Happy" Challenge.

2 - Building a model in Keras

Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.

Here is an example of a model in Keras:

def model(input_shape):
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)

    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)

    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')

    return model

Note that Keras uses a different variable-naming convention than the one we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable at each step of forward propagation (X, Z1, A1, Z2, A2, etc. for the different layers), in Keras each line above just reassigns X to a new value using X = .... In other words, at each step of forward propagation we write the latest value in the computation into the same variable X. The only exception is X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above).
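
As a tiny illustration of this reassignment pattern (a throwaway example, not part of the assignment; demoModel is just an illustrative name):

X_input = Input((64, 64, 3))
X = Conv2D(32, (7, 7), name='demo_conv')(X_input)  # X now refers to the conv output
X = Activation('relu')(X)                          # X is overwritten again
# X_input stays untouched because Model() needs the original input tensor
demoModel = Model(inputs = X_input, outputs = X, name='DemoModel')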

Exercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), Dropout().

Note: Be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling, and fully-connected layers are adapted to the volumes you're applying them to.
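
For the suggested architecture and a (64, 64, 3) input, the volumes evolve as follows (a worked trace using the valid-convolution formula ⌊(n + 2p − f)/s⌋ + 1):

# Input                              -> (64, 64, 3)
# ZeroPadding2D((3, 3))              -> (70, 70, 3)    64 + 2*3 = 70
# Conv2D(32, (7, 7), strides=(1,1))  -> (64, 64, 32)   (70 - 7)/1 + 1 = 64
# MaxPooling2D((2, 2))               -> (32, 32, 32)   64 / 2 = 32
# Flatten()                          -> (32768,)       32 * 32 * 32
# Dense(1)                           -> (1,)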

In [5]:
# GRADED FUNCTION: HappyModel

def HappyModel(input_shape):
    """
    Implementation of the HappyModel.
    
    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    """
    
    ### START CODE HERE ###
    # Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and try out other
    # network architectures as well. 
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)

    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)

    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')
    
    ### END CODE HERE ###
    
    return model

You have now built a function to describe your model. To train and test this model, there are four steps in Keras:

  1. Create the model by calling the function above
  2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])
  3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)
  4. Test the model on test data by calling model.evaluate(x = ..., y = ...)

If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.
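
Put together, the four steps might look like this (a compact sketch with illustrative, untuned hyperparameters; myModel is just a placeholder name, and the graded cells below carry out the real steps one at a time):

myModel = HappyModel(X_train.shape[1:])                 # 1. create
myModel.compile(optimizer = "adam",
                loss = "binary_crossentropy",
                metrics = ["accuracy"])                 # 2. compile
myModel.fit(x = X_train, y = Y_train,
            epochs = 10, batch_size = 32)               # 3. train
loss, acc = myModel.evaluate(x = X_test, y = Y_test)    # 4. evaluate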

Exercise: Implement step 1, i.e. create the model.

In [6]:
### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train.shape[1:])
### END CODE HERE ###

Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem.

In [8]:
### START CODE HERE ### (1 line)
happyModel.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
### END CODE HERE ###

Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.

In [9]:
### START CODE HERE ### (1 line)
happyModel.fit(X_train, Y_train, 
              batch_size =16,
              epochs =100,
              validation_data = (X_test, Y_test))
### END CODE HERE ###
Train on 600 samples, validate on 150 samples
Epoch 1/100
600/600 [==============================] - 13s - loss: 0.7597 - acc: 0.7533 - val_loss: 0.6448 - val_acc: 0.5600
Epoch 2/100
600/600 [==============================] - 13s - loss: 0.3351 - acc: 0.8467 - val_loss: 0.7893 - val_acc: 0.6067
Epoch 3/100
600/600 [==============================] - 13s - loss: 0.2448 - acc: 0.9117 - val_loss: 0.4201 - val_acc: 0.7867
Epoch 4/100
600/600 [==============================] - 13s - loss: 0.1839 - acc: 0.9400 - val_loss: 0.3059 - val_acc: 0.8933
Epoch 5/100
600/600 [==============================] - 13s - loss: 0.0927 - acc: 0.9650 - val_loss: 0.2677 - val_acc: 0.9133
Epoch 6/100
600/600 [==============================] - 13s - loss: 0.0866 - acc: 0.9700 - val_loss: 0.2763 - val_acc: 0.8867
Epoch 7/100
600/600 [==============================] - 13s - loss: 0.0874 - acc: 0.9667 - val_loss: 1.1154 - val_acc: 0.6067
Epoch 8/100
600/600 [==============================] - 13s - loss: 0.0881 - acc: 0.9667 - val_loss: 0.2083 - val_acc: 0.9400
Epoch 9/100
600/600 [==============================] - 13s - loss: 0.0434 - acc: 0.9867 - val_loss: 0.1511 - val_acc: 0.9333
Epoch 10/100
600/600 [==============================] - 13s - loss: 0.0597 - acc: 0.9783 - val_loss: 0.1487 - val_acc: 0.9267
Epoch 11/100
600/600 [==============================] - 13s - loss: 0.2792 - acc: 0.9200 - val_loss: 1.0850 - val_acc: 0.7533
Epoch 12/100
600/600 [==============================] - 13s - loss: 0.2084 - acc: 0.9317 - val_loss: 1.4307 - val_acc: 0.7667
Epoch 13/100
600/600 [==============================] - 13s - loss: 0.2819 - acc: 0.9167 - val_loss: 0.3686 - val_acc: 0.8867
Epoch 14/100
600/600 [==============================] - 13s - loss: 0.0732 - acc: 0.9800 - val_loss: 1.1618 - val_acc: 0.7200
Epoch 15/100
600/600 [==============================] - 13s - loss: 0.0650 - acc: 0.9833 - val_loss: 0.3913 - val_acc: 0.9000
Epoch 16/100
600/600 [==============================] - 13s - loss: 0.0586 - acc: 0.9867 - val_loss: 0.1053 - val_acc: 0.9733
Epoch 17/100
600/600 [==============================] - 13s - loss: 0.0822 - acc: 0.9750 - val_loss: 0.1106 - val_acc: 0.9600
Epoch 18/100
600/600 [==============================] - 13s - loss: 0.0894 - acc: 0.9700 - val_loss: 0.8804 - val_acc: 0.7533
Epoch 19/100
600/600 [==============================] - 13s - loss: 0.0896 - acc: 0.9750 - val_loss: 0.1571 - val_acc: 0.9200
Epoch 20/100
600/600 [==============================] - 13s - loss: 0.0472 - acc: 0.9867 - val_loss: 0.2839 - val_acc: 0.9000
Epoch 21/100
600/600 [==============================] - 13s - loss: 0.0501 - acc: 0.9800 - val_loss: 0.2882 - val_acc: 0.8800
Epoch 22/100
600/600 [==============================] - 13s - loss: 0.0445 - acc: 0.9833 - val_loss: 0.1515 - val_acc: 0.9400
Epoch 23/100
600/600 [==============================] - 13s - loss: 0.0259 - acc: 0.9917 - val_loss: 0.2761 - val_acc: 0.9067
Epoch 24/100
600/600 [==============================] - 13s - loss: 0.0400 - acc: 0.9917 - val_loss: 0.1279 - val_acc: 0.9400
Epoch 25/100
600/600 [==============================] - 13s - loss: 0.0438 - acc: 0.9817 - val_loss: 1.1221 - val_acc: 0.7533
Epoch 26/100
600/600 [==============================] - 13s - loss: 0.0948 - acc: 0.9767 - val_loss: 0.2971 - val_acc: 0.9000
Epoch 27/100
600/600 [==============================] - 13s - loss: 0.0350 - acc: 0.9850 - val_loss: 0.1246 - val_acc: 0.9400
Epoch 28/100
600/600 [==============================] - 13s - loss: 0.0287 - acc: 0.9917 - val_loss: 0.0833 - val_acc: 0.9733
Epoch 29/100
600/600 [==============================] - 14s - loss: 0.0305 - acc: 0.9933 - val_loss: 0.7595 - val_acc: 0.8467
Epoch 30/100
600/600 [==============================] - 13s - loss: 0.0348 - acc: 0.9850 - val_loss: 0.2087 - val_acc: 0.9067
Epoch 31/100
600/600 [==============================] - 13s - loss: 0.0140 - acc: 0.9967 - val_loss: 0.4403 - val_acc: 0.8667
Epoch 32/100
600/600 [==============================] - 13s - loss: 0.0340 - acc: 0.9850 - val_loss: 0.0877 - val_acc: 0.9667
Epoch 33/100
600/600 [==============================] - 14s - loss: 0.0447 - acc: 0.9850 - val_loss: 1.7104 - val_acc: 0.6467
Epoch 34/100
600/600 [==============================] - 13s - loss: 0.0511 - acc: 0.9750 - val_loss: 0.1159 - val_acc: 0.9533
Epoch 35/100
600/600 [==============================] - 13s - loss: 0.0242 - acc: 0.9900 - val_loss: 1.0062 - val_acc: 0.7733
Epoch 36/100
600/600 [==============================] - 13s - loss: 0.0065 - acc: 0.9967 - val_loss: 0.0782 - val_acc: 0.9733
Epoch 37/100
600/600 [==============================] - 13s - loss: 0.0314 - acc: 0.9883 - val_loss: 0.0621 - val_acc: 0.9733
Epoch 38/100
600/600 [==============================] - 13s - loss: 0.0217 - acc: 0.9900 - val_loss: 0.0952 - val_acc: 0.9667
Epoch 39/100
600/600 [==============================] - 13s - loss: 0.0084 - acc: 0.9967 - val_loss: 0.2924 - val_acc: 0.9067
Epoch 40/100
600/600 [==============================] - 13s - loss: 0.0611 - acc: 0.9817 - val_loss: 0.0666 - val_acc: 0.9800
Epoch 41/100
600/600 [==============================] - 13s - loss: 0.0312 - acc: 0.9950 - val_loss: 0.7322 - val_acc: 0.7933
Epoch 42/100
600/600 [==============================] - 13s - loss: 0.0558 - acc: 0.9883 - val_loss: 0.1057 - val_acc: 0.9600
Epoch 43/100
600/600 [==============================] - 14s - loss: 0.0836 - acc: 0.9750 - val_loss: 0.9094 - val_acc: 0.7400
Epoch 44/100
600/600 [==============================] - 13s - loss: 0.0536 - acc: 0.9817 - val_loss: 1.9369 - val_acc: 0.7667
Epoch 45/100
600/600 [==============================] - 13s - loss: 0.0704 - acc: 0.9733 - val_loss: 0.1968 - val_acc: 0.9267
Epoch 46/100
600/600 [==============================] - 13s - loss: 0.1707 - acc: 0.9583 - val_loss: 3.0167 - val_acc: 0.6133
Epoch 47/100
600/600 [==============================] - 14s - loss: 0.1154 - acc: 0.9650 - val_loss: 1.0540 - val_acc: 0.8400
Epoch 48/100
600/600 [==============================] - 14s - loss: 0.0647 - acc: 0.9783 - val_loss: 0.4754 - val_acc: 0.8667
Epoch 49/100
600/600 [==============================] - 14s - loss: 0.1745 - acc: 0.9633 - val_loss: 0.1879 - val_acc: 0.9333
Epoch 50/100
600/600 [==============================] - 14s - loss: 0.1117 - acc: 0.9767 - val_loss: 0.1632 - val_acc: 0.9667
Epoch 51/100
600/600 [==============================] - 14s - loss: 0.0445 - acc: 0.9867 - val_loss: 0.1275 - val_acc: 0.9667
Epoch 52/100
600/600 [==============================] - 14s - loss: 0.0210 - acc: 0.9917 - val_loss: 0.0901 - val_acc: 0.9733
Epoch 53/100
600/600 [==============================] - 14s - loss: 0.0035 - acc: 0.9983 - val_loss: 0.1079 - val_acc: 0.9733
Epoch 54/100
600/600 [==============================] - 14s - loss: 0.0061 - acc: 0.9983 - val_loss: 0.1565 - val_acc: 0.9467
Epoch 55/100
600/600 [==============================] - 14s - loss: 0.0300 - acc: 0.9900 - val_loss: 0.3697 - val_acc: 0.9133
Epoch 56/100
600/600 [==============================] - 14s - loss: 0.0245 - acc: 0.9900 - val_loss: 0.0898 - val_acc: 0.9667
Epoch 57/100
600/600 [==============================] - 14s - loss: 0.0096 - acc: 0.9933 - val_loss: 0.0863 - val_acc: 0.9667
Epoch 58/100
600/600 [==============================] - 14s - loss: 0.0315 - acc: 0.9900 - val_loss: 0.1187 - val_acc: 0.9600
Epoch 59/100
600/600 [==============================] - 14s - loss: 0.0497 - acc: 0.9850 - val_loss: 0.2507 - val_acc: 0.9533
Epoch 60/100
600/600 [==============================] - 14s - loss: 0.0674 - acc: 0.9867 - val_loss: 0.0758 - val_acc: 0.9667
Epoch 61/100
600/600 [==============================] - 14s - loss: 0.0247 - acc: 0.9883 - val_loss: 0.0837 - val_acc: 0.9733
Epoch 62/100
600/600 [==============================] - 13s - loss: 0.0273 - acc: 0.9933 - val_loss: 0.2454 - val_acc: 0.9467
Epoch 63/100
600/600 [==============================] - 13s - loss: 0.0228 - acc: 0.9900 - val_loss: 0.0721 - val_acc: 0.9667
Epoch 64/100
600/600 [==============================] - 13s - loss: 0.0112 - acc: 0.9967 - val_loss: 0.2193 - val_acc: 0.9533
Epoch 65/100
600/600 [==============================] - 13s - loss: 0.0432 - acc: 0.9883 - val_loss: 0.0986 - val_acc: 0.9733
Epoch 66/100
600/600 [==============================] - 13s - loss: 0.0380 - acc: 0.9933 - val_loss: 0.1221 - val_acc: 0.9533
Epoch 67/100
600/600 [==============================] - 13s - loss: 0.0100 - acc: 0.9967 - val_loss: 0.1710 - val_acc: 0.9467
Epoch 68/100
600/600 [==============================] - 13s - loss: 0.0240 - acc: 0.9950 - val_loss: 0.1549 - val_acc: 0.9667
Epoch 69/100
600/600 [==============================] - 13s - loss: 0.0500 - acc: 0.9867 - val_loss: 0.5269 - val_acc: 0.8867
Epoch 70/100
600/600 [==============================] - 13s - loss: 0.0050 - acc: 0.9983 - val_loss: 0.5677 - val_acc: 0.8867
Epoch 71/100
600/600 [==============================] - 13s - loss: 0.0049 - acc: 0.9983 - val_loss: 0.2869 - val_acc: 0.9267
Epoch 72/100
600/600 [==============================] - 13s - loss: 0.0023 - acc: 1.0000 - val_loss: 0.1034 - val_acc: 0.9667
Epoch 73/100
600/600 [==============================] - 13s - loss: 4.3090e-04 - acc: 1.0000 - val_loss: 0.0745 - val_acc: 0.9800
Epoch 74/100
600/600 [==============================] - 13s - loss: 3.3906e-04 - acc: 1.0000 - val_loss: 0.0751 - val_acc: 0.9733
Epoch 75/100
600/600 [==============================] - 13s - loss: 2.0612e-04 - acc: 1.0000 - val_loss: 0.0761 - val_acc: 0.9733
Epoch 76/100
600/600 [==============================] - 13s - loss: 1.0635e-04 - acc: 1.0000 - val_loss: 0.0757 - val_acc: 0.9733
Epoch 77/100
600/600 [==============================] - 13s - loss: 2.9767e-04 - acc: 1.0000 - val_loss: 0.0887 - val_acc: 0.9800
Epoch 78/100
600/600 [==============================] - 13s - loss: 2.1159e-04 - acc: 1.0000 - val_loss: 0.0743 - val_acc: 0.9667
Epoch 79/100
600/600 [==============================] - 13s - loss: 1.3721e-04 - acc: 1.0000 - val_loss: 0.0748 - val_acc: 0.9667
Epoch 80/100
600/600 [==============================] - 13s - loss: 1.6440e-04 - acc: 1.0000 - val_loss: 0.0732 - val_acc: 0.9733
Epoch 81/100
600/600 [==============================] - 13s - loss: 2.0216e-04 - acc: 1.0000 - val_loss: 0.0823 - val_acc: 0.9800
Epoch 82/100
600/600 [==============================] - 13s - loss: 9.5052e-05 - acc: 1.0000 - val_loss: 0.0741 - val_acc: 0.9667
Epoch 83/100
600/600 [==============================] - 13s - loss: 7.5819e-05 - acc: 1.0000 - val_loss: 0.0744 - val_acc: 0.9667
Epoch 84/100
600/600 [==============================] - 13s - loss: 1.1077e-04 - acc: 1.0000 - val_loss: 0.0807 - val_acc: 0.9733
Epoch 85/100
600/600 [==============================] - 13s - loss: 5.1231e-04 - acc: 1.0000 - val_loss: 0.0896 - val_acc: 0.9800
Epoch 86/100
600/600 [==============================] - 13s - loss: 0.0350 - acc: 0.9917 - val_loss: 0.2143 - val_acc: 0.9600
Epoch 87/100
600/600 [==============================] - 13s - loss: 0.0285 - acc: 0.9933 - val_loss: 1.4956 - val_acc: 0.7333
Epoch 88/100
600/600 [==============================] - 13s - loss: 0.0380 - acc: 0.9867 - val_loss: 0.2301 - val_acc: 0.9533
Epoch 89/100
600/600 [==============================] - 13s - loss: 0.0445 - acc: 0.9867 - val_loss: 0.2951 - val_acc: 0.9133
Epoch 90/100
600/600 [==============================] - 13s - loss: 0.0544 - acc: 0.9900 - val_loss: 0.2553 - val_acc: 0.9267
Epoch 91/100
600/600 [==============================] - 13s - loss: 0.0540 - acc: 0.9850 - val_loss: 0.2139 - val_acc: 0.9200
Epoch 92/100
600/600 [==============================] - 13s - loss: 0.0130 - acc: 0.9967 - val_loss: 0.2067 - val_acc: 0.9600
Epoch 93/100
600/600 [==============================] - 13s - loss: 0.0883 - acc: 0.9767 - val_loss: 2.9100 - val_acc: 0.6667
Epoch 94/100
600/600 [==============================] - 13s - loss: 0.0429 - acc: 0.9917 - val_loss: 0.0911 - val_acc: 0.9467
Epoch 95/100
600/600 [==============================] - 13s - loss: 0.0104 - acc: 0.9983 - val_loss: 0.1569 - val_acc: 0.9467
Epoch 96/100
600/600 [==============================] - 13s - loss: 0.0189 - acc: 0.9967 - val_loss: 0.1647 - val_acc: 0.9333
Epoch 97/100
600/600 [==============================] - 13s - loss: 0.0120 - acc: 0.9967 - val_loss: 0.1425 - val_acc: 0.9467
Epoch 98/100
600/600 [==============================] - 13s - loss: 0.0165 - acc: 0.9950 - val_loss: 0.0619 - val_acc: 0.9600
Epoch 99/100
600/600 [==============================] - 13s - loss: 0.0024 - acc: 1.0000 - val_loss: 0.2694 - val_acc: 0.9533
Epoch 100/100
600/600 [==============================] - 13s - loss: 0.0068 - acc: 0.9950 - val_loss: 0.3329 - val_acc: 0.9333
Out[9]:
<keras.callbacks.History at 0x7f86113465f8>

Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.
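
If you instead want to restart training from scratch, rebuilding and recompiling the model re-initializes its weights. A short sketch (happyModel2 is just an illustrative name):

happyModel2 = HappyModel(X_train.shape[1:])   # fresh, randomly initialized weights
happyModel2.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
# happyModel2.fit(...) now trains from scratch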

Exercise: Implement step 4, i.e. test/evaluate the model.

In [ ]:
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x = X_test, y = Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))

If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.

To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini-batch size of 16 and the "adam" optimizer. Our model reaches decent accuracy after just 2-5 epochs, though, so if you're comparing architectures you can also train each one for only a few epochs and see how they compare.

If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it (a combined sketch follows the list):

  • Try using blocks of CONV->BATCHNORM->RELU such as:
    X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    
    until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.
  • You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
  • Change your optimizer. We find Adam works well.
  • If training is slow or you run into memory issues, lower your batch_size (12 is usually a good compromise)
  • Run on more epochs, until you see the train accuracy plateauing.
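
Here is a sketch combining several of the tips above: stacked CONV->BATCHNORM->RELU blocks with MAXPOOL to shrink height/width while growing channels, plus optional Dropout. HappyModelDeeper and its layer sizes are illustrative, not tuned:

def HappyModelDeeper(input_shape):
    X_input = Input(input_shape)

    # Block 1: CONV -> BN -> RELU -> MAXPOOL, 64x64 -> 32x32
    X = Conv2D(16, (3, 3), strides = (1, 1), padding = 'same', name = 'conv0')(X_input)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name = 'max_pool0')(X)

    # Block 2: CONV -> BN -> RELU -> MAXPOOL, 32x32 -> 16x16, more channels
    X = Conv2D(32, (3, 3), strides = (1, 1), padding = 'same', name = 'conv1')(X)
    X = BatchNormalization(axis = 3, name = 'bn1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name = 'max_pool1')(X)

    # FLATTEN + DROPOUT + FULLYCONNECTED
    X = Flatten()(X)
    X = Dropout(0.5)(X)
    X = Dense(1, activation = 'sigmoid', name = 'fc')(X)

    return Model(inputs = X_input, outputs = X, name = 'HappyModelDeeper')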

Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results.

Note: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.

3 - Conclusion

Congratulations, you have solved the Happy House challenge!

Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here.

What we would like you to remember from this assignment:

  • Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras?
  • Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.

4 - Test with your own image (Optional)

Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:

1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!

The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!

In [ ]:
### START CODE HERE ###
img_path = 'my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
# Caution: preprocess_input applies ImageNet-style preprocessing (mean subtraction and
# channel reordering), which differs from the simple /255. scaling used during training.
# Rescaling with x = x/255. may match the training pipeline more closely.
x = preprocess_input(x)

print(happyModel.predict(x))

5 - Other useful functions in Keras (Optional)

Two other basic features of Keras that you'll find useful are:

  • model.summary(): prints the details of your layers in a table, with the sizes of their inputs/outputs
  • plot_model(): plots your graph in a nice layout and saves it as a ".png" file (via its to_file argument) if you'd like to share it on social media ;). You can find the saved file under "File" then "Open..." in the upper bar of the notebook; SVG() with model_to_dot() renders the same graph inline.

Run the following code.

In [ ]:
happyModel.summary()
In [ ]:
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))

