Face Recognition using Convolutional Neural Networks

deep learning
computer vision
CNN
Building a CNN model for facial recognition using the ORL database.
Author

Chaance Graves

Published

November 21, 2023

Open In Colab

Project Description

Facial recognition technology has numerous applications in security, authentication, and personalization systems. This project implements a Convolutional Neural Network (CNN) to accurately identify individuals from facial images using the ORL database of faces.

Import the relevant packages and necessary dependencies

# Python built-in libraries
from pathlib import Path

# Data pre-processing and visualization
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Scikit-learn functions
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# For model training and compilation
from keras import layers
from keras.utils import to_categorical
from keras.models import Sequential
from keras.callbacks import EarlyStopping

# suppress warnings output messages
import warnings
warnings.filterwarnings('ignore')

Upload and import the data

Let’s load and normalize the images

image_dir = Path('datasets/ORL_face_database')

image_dir
PosixPath('datasets/ORL_face_database')
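
The loading loop below assumes the standard ORL layout: one subdirectory per subject (s1 through s40), each holding ten grayscale face images. A quick check of that assumption:

# Verify the expected layout: one folder per subject, ten images each
subject_dirs = sorted(p for p in image_dir.iterdir() if p.is_dir())
print(f'subject folders found: {len(subject_dirs)}')
print(f'images in {subject_dirs[0].name}: {len(list(subject_dirs[0].iterdir()))}')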
# Create empty lists to store the image data and labels:
images = []
labels = []
# Iterate over the subdirectories of the dataset (representing different classes or labels)
for person_dir in image_dir.iterdir():
    if person_dir.is_dir():
        label = int(person_dir.name[1:])

        for image_file in person_dir.iterdir():
            image = cv2.imread(str(image_file))
            grayscale_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            numpy_image = np.array(grayscale_image)

            # Append the image and its label to the images and labels lists
            images.append(numpy_image)
            labels.append(label)
# Convert the image data and labels into NumPy arrays
images = np.array(images)
images = images.astype("float32") / 255  # Normalize the images
labels = np.array(labels)
print(f'normalized image data: {images[:]}')
print('\n')
# Check the shapes after modifications
print(f"numpy images shape: {images.shape}")
print(f"labels: {labels.shape}")
normalized image data: [[[0.3372549  0.3529412  0.34117648 ... 0.32941177 0.33333334 0.34117648]
  [0.3647059  0.3372549  0.34117648 ... 0.34509805 0.32941177 0.34509805]
  [0.34901962 0.3372549  0.34509805 ... 0.34509805 0.32941177 0.34117648]
  ...
  [0.38039216 0.40392157 0.49411765 ... 0.42745098 0.36862746 0.31764707]
  [0.69411767 0.7137255  0.7490196  ... 0.5294118  0.49803922 0.52156866]
  [0.79607844 0.79607844 0.79607844 ... 0.6        0.5568628  0.54509807]]

 [[0.37254903 0.34901962 0.34901962 ... 0.40392157 0.4392157  0.35686275]
  [0.37254903 0.37254903 0.3647059  ... 0.34901962 0.41960785 0.43529412]
  [0.35686275 0.37254903 0.36862746 ... 0.3137255  0.3882353  0.39215687]
  ...
  [0.6784314  0.81960785 0.8156863  ... 0.22352941 0.21176471 0.21176471]
  [0.7058824  0.81960785 0.81960785 ... 0.21176471 0.22745098 0.21176471]
  [0.72156864 0.8117647  0.8156863  ... 0.16862746 0.21568628 0.21568628]]

 [[0.35686275 0.3254902  0.27450982 ... 0.63529414 0.6313726  0.6156863 ]
  [0.32156864 0.31764707 0.28235295 ... 0.5803922  0.64705884 0.63529414]
  [0.3372549  0.2901961  0.27450982 ... 0.5411765  0.6431373  0.65882355]
  ...
  [0.21176471 0.19607843 0.20392157 ... 0.6117647  0.70980394 0.75686276]
  [0.20392157 0.2        0.2        ... 0.64705884 0.7294118  0.7647059 ]
  [0.1882353  0.21960784 0.1764706  ... 0.67058825 0.7372549  0.74509805]]

 ...

 [[0.52156866 0.5058824  0.52156866 ... 0.49411765 0.49411765 0.4862745 ]
  [0.5058824  0.5176471  0.5137255  ... 0.49019608 0.5019608  0.49411765]
  [0.5176471  0.5058824  0.5137255  ... 0.49411765 0.49019608 0.49803922]
  ...
  [0.11764706 0.12156863 0.07843138 ... 0.08627451 0.12941177 0.1254902 ]
  [0.11764706 0.10196079 0.09019608 ... 0.08235294 0.11372549 0.12156863]
  [0.09019608 0.11764706 0.08235294 ... 0.05882353 0.11372549 0.10196079]]

 [[0.53333336 0.53333336 0.53333336 ... 0.5019608  0.52156866 0.52156866]
  [0.5372549  0.5372549  0.5411765  ... 0.50980395 0.52156866 0.49803922]
  [0.5254902  0.54901963 0.5372549  ... 0.5254902  0.52156866 0.5019608 ]
  ...
  [0.23137255 0.23529412 0.21568628 ... 0.0627451  0.08627451 0.08627451]
  [0.25490198 0.23921569 0.1882353  ... 0.06666667 0.08627451 0.07450981]
  [0.23529412 0.1882353  0.21960784 ... 0.07450981 0.07843138 0.08235294]]

 [[0.5254902  0.5411765  0.53333336 ... 0.5058824  0.5137255  0.49803922]
  [0.5254902  0.5294118  0.5294118  ... 0.50980395 0.50980395 0.5058824 ]
  [0.52156866 0.5254902  0.5294118  ... 0.49803922 0.5176471  0.52156866]
  ...
  [0.09803922 0.07450981 0.08235294 ... 0.10588235 0.22352941 0.31764707]
  [0.10196079 0.08627451 0.07450981 ... 0.11764706 0.2784314  0.31764707]
  [0.08235294 0.09803922 0.08627451 ... 0.21568628 0.30980393 0.31764707]]]


numpy images shape: (400, 112, 92)
labels: (400,)

Let’s visualize a sample of the images and their labels

# Define the number of images you want to plot
num_images_to_plot = 25  # Change this number as needed

# Ensure the images have shape (112, 92) for plotting
# (images already has shape (num_samples, 112, 92), so this reshape is a safeguard)
reshaped_images = images[:num_images_to_plot].reshape(-1, 112, 92)

# Plot the images
plt.figure(figsize=(12, 12))
for i in range(num_images_to_plot):
    plt.subplot(5, 5, i + 1)  # Change the subplot layout as per your preference
    plt.imshow(reshaped_images[i], cmap='gray')
    # Set the title as per the corresponding label
    plt.title(f'Subject Person: {labels[i]}\n ({reshaped_images[i].shape[1]}, {reshaped_images[i].shape[0]})')
    #plt.axis('off')  # Hide axes
plt.tight_layout()
plt.show()

Split the Data into Train, Test and Validation Sets

X_data = images # store images in X_data
Y_data = labels.reshape(-1,1) # store labels in Y_data

# Find unique classes in the labels
unique_labels = np.unique(Y_data)

unique_labels
array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17,
       18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
       35, 36, 37, 38, 39, 40])
# Reindex the labels to start from 0
label_mapping = {label: index for index, label in enumerate(unique_labels)}
Y_data_reindexed = np.array([label_mapping[label[0]] for label in Y_data])

# Verify the unique values in the reindexed labels
print(np.unique(Y_data_reindexed))
[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39]
  1. Initial Training and Test Split:
    • First, split the data into x_train (training data) and x_test (test data).
  2. Create Validation Set:
    • To generate a validation set from the training data (x_train), perform another split, which carves out a small subset reserved for validation.
# Take a random sample: 80% of the data for the training set, 20% for the test set

# The resulting variables will represent:
# x_train: Training data
# y_train: Corresponding training labels
# x_test: Test data
# y_test: Corresponding test labels

x_train, x_test, y_train, y_test = train_test_split(X_data, Y_data_reindexed, test_size=0.2, random_state=42)

print(f'x_train: {x_train.shape}')
print(f'x_test: {x_test.shape}')
print(f'y_train: {y_train.shape}')
print(f'y_test: {y_test.shape}')
x_train: (320, 112, 92)
x_test: (80, 112, 92)
y_train: (320,)
y_test: (80,)
# Split the training data further into x_train, x_val, y_train, y_val

# The resulting variables will represent:
# x_val: Validation data
# y_val: Corresponding validation labels

x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.05, random_state=42)

print(f'x_train: {x_train.shape}')
print(f'x_val: {x_val.shape}')
print(f'y_train: {y_train.shape}')
print(f'y_val: {y_val.shape}')
x_train: (304, 112, 92)
x_val: (16, 112, 92)
y_train: (304,)
y_val: (16,)
  • x_train, y_train: These represent the primary training dataset and labels, about 76% of the original data (95% of the 80% training split).
  • x_test, y_test: These represent the test dataset and labels, 20% of the original data.
  • x_val, y_val: These represent the validation dataset and labels, about 4% of the original data (5% of the training split); see the sanity check below.
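
As a quick sanity check, these proportions can be recomputed from the shapes printed above:

# Sanity check of the split proportions (400 images in total)
n_total = len(X_data)
for name, subset in [('train', x_train), ('val', x_val), ('test', x_test)]:
    print(f'{name}: {len(subset)} images ({len(subset) / n_total:.0%} of the data)')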

Check that the images are equal sizes and prepare the data for input to the CNN model

# x_train, x_test, and x_val contain the image data
# Reshape the input data to add a channel dimension and match the expected input shape
x_train = x_train.reshape(-1, 112, 92, 1)
x_test = x_test.reshape(-1, 112, 92, 1)
x_val = x_val.reshape(-1, 112, 92, 1)
input_shape = (x_train.shape[1], x_train.shape[2], 1)

print(f'Input Shape: {input_shape}')
print(f'x_train: {x_train.shape}')
print(f'x_test: {x_test.shape}')
print(f'x_val: {x_val.shape}')
Input Shape: (112, 92, 1)
x_train: (304, 112, 92, 1)
x_test: (80, 112, 92, 1)
x_val: (16, 112, 92, 1)
# Convert labels to categorical format
num_classes = len(np.unique(Y_data_reindexed))

# Convert integer labels to one-hot encoded labels
y_train_categorical = to_categorical(y_train, num_classes)
y_test_categorical = to_categorical(y_test, num_classes)
y_val_categorical = to_categorical(y_val, num_classes)

print(f'The number of the classes: {num_classes}')
print(f'y_train categorical shape: {y_train_categorical.shape}')
print(f'y_test categorical shape: {y_test_categorical.shape}')
print(f'y_val categorical shape: {y_val_categorical.shape}')
The number of the classes: 40
y_train categorical shape: (304, 40)
y_test categorical shape: (80, 40)
y_val categorical shape: (16, 40)
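
As a quick illustration of the one-hot format, an integer label such as 3 becomes a length-40 vector with a single 1 at index 3:

# One-hot illustration (hypothetical label value 3)
example = to_categorical([3], num_classes)
print(example.shape)       # (1, 40)
print(np.argmax(example))  # 3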

Build the CNN Model

  1. Convolutional layer
  2. Pooling layer
  3. Fully connected layer

Let’s build the CNN architecture by stacking these layers, varying their number and position.

# Add the hidden layers and the output layer to our model
cnn_model = Sequential([
    # Convolutional base
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),  # additional Conv2D layer
    layers.MaxPooling2D((2, 2)),

    # Fully connected head
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),  # dropout for regularization
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, activation='softmax')
])

# Display the summary of the model architecture and the number of parameters
cnn_model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                     Output Shape                  Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ conv2d (Conv2D)                 │ (None, 110, 90, 32)    │           320 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ batch_normalization             │ (None, 110, 90, 32)    │           128 │
│ (BatchNormalization)            │                        │               │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d (MaxPooling2D)    │ (None, 55, 45, 32)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_1 (Conv2D)               │ (None, 53, 43, 64)     │        18,496 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_1 (MaxPooling2D)  │ (None, 26, 21, 64)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten (Flatten)               │ (None, 34944)          │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense (Dense)                   │ (None, 256)            │     8,945,920 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 256)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 128)            │        32,896 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_2 (Dense)                 │ (None, 40)             │         5,160 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 9,002,920 (34.34 MB)
 Trainable params: 9,002,856 (34.34 MB)
 Non-trainable params: 64 (256.00 B)
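
Most of the roughly 9 million parameters come from the first Dense layer: flattening the (26, 21, 64) feature map yields 26 × 21 × 64 = 34,944 inputs, and 34,944 × 256 weights plus 256 biases gives the 8,945,920 parameters shown above.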
# Compile the model
cnn_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
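
The categorical_crossentropy loss expects the one-hot encoded labels created earlier; if the integer labels were used directly, sparse_categorical_crossentropy would be the matching choice.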

Train the Model

Train the model for up to 100 epochs, then plot the training loss and accuracy against epochs. We monitor the validation loss at each epoch; if it has not improved for 10 consecutive epochs, training is interrupted and the best weights are restored.

# Define the early stopping callback
early_stopping = EarlyStopping(monitor='val_loss',
                               patience=10,
                               restore_best_weights=True)
history = cnn_model.fit(np.array(x_train), y_train_categorical,
                        epochs=100,
                        verbose=2,
                        validation_data=(np.array(x_val), y_val_categorical),
                        callbacks=[early_stopping])
Epoch 1/100
10/10 - 3s - 267ms/step - accuracy: 0.0230 - loss: 5.9299 - val_accuracy: 0.1250 - val_loss: 3.6867
Epoch 2/100
10/10 - 1s - 111ms/step - accuracy: 0.0987 - loss: 3.5003 - val_accuracy: 0.0000e+00 - val_loss: 3.6894
Epoch 3/100
10/10 - 1s - 111ms/step - accuracy: 0.1809 - loss: 3.1518 - val_accuracy: 0.0000e+00 - val_loss: 3.6889
Epoch 4/100
10/10 - 1s - 113ms/step - accuracy: 0.3520 - loss: 2.6107 - val_accuracy: 0.0000e+00 - val_loss: 3.6785
Epoch 5/100
10/10 - 4s - 417ms/step - accuracy: 0.4112 - loss: 2.1143 - val_accuracy: 0.0625 - val_loss: 3.6438
Epoch 6/100
10/10 - 1s - 113ms/step - accuracy: 0.5329 - loss: 1.6213 - val_accuracy: 0.0625 - val_loss: 3.6190
Epoch 7/100
10/10 - 1s - 111ms/step - accuracy: 0.7039 - loss: 1.0945 - val_accuracy: 0.0625 - val_loss: 3.6088
Epoch 8/100
10/10 - 1s - 115ms/step - accuracy: 0.6776 - loss: 1.0109 - val_accuracy: 0.0625 - val_loss: 3.5772
Epoch 9/100
10/10 - 1s - 114ms/step - accuracy: 0.7730 - loss: 0.7799 - val_accuracy: 0.0000e+00 - val_loss: 3.5266
Epoch 10/100
10/10 - 1s - 113ms/step - accuracy: 0.7434 - loss: 0.7901 - val_accuracy: 0.1875 - val_loss: 3.5233
Epoch 11/100
10/10 - 1s - 111ms/step - accuracy: 0.8783 - loss: 0.4624 - val_accuracy: 0.1250 - val_loss: 3.4865
Epoch 12/100
10/10 - 1s - 109ms/step - accuracy: 0.8586 - loss: 0.4446 - val_accuracy: 0.3125 - val_loss: 3.4084
Epoch 13/100
10/10 - 1s - 109ms/step - accuracy: 0.8520 - loss: 0.4221 - val_accuracy: 0.2500 - val_loss: 3.4584
Epoch 14/100
10/10 - 1s - 111ms/step - accuracy: 0.8783 - loss: 0.4136 - val_accuracy: 0.3750 - val_loss: 3.4547
Epoch 15/100
10/10 - 1s - 108ms/step - accuracy: 0.8914 - loss: 0.3152 - val_accuracy: 0.3125 - val_loss: 3.4135
Epoch 16/100
10/10 - 1s - 111ms/step - accuracy: 0.9079 - loss: 0.3323 - val_accuracy: 0.5000 - val_loss: 3.3642
Epoch 17/100
10/10 - 1s - 109ms/step - accuracy: 0.9243 - loss: 0.2643 - val_accuracy: 0.5000 - val_loss: 3.3989
Epoch 18/100
10/10 - 1s - 114ms/step - accuracy: 0.8980 - loss: 0.3166 - val_accuracy: 0.6250 - val_loss: 3.3122
Epoch 19/100
10/10 - 1s - 110ms/step - accuracy: 0.9145 - loss: 0.2861 - val_accuracy: 0.7500 - val_loss: 3.2894
Epoch 20/100
10/10 - 1s - 110ms/step - accuracy: 0.9342 - loss: 0.2391 - val_accuracy: 0.8125 - val_loss: 3.2451
Epoch 21/100
10/10 - 1s - 109ms/step - accuracy: 0.9276 - loss: 0.2563 - val_accuracy: 0.7500 - val_loss: 3.2335
Epoch 22/100
10/10 - 1s - 112ms/step - accuracy: 0.9243 - loss: 0.2438 - val_accuracy: 0.6875 - val_loss: 3.2463
Epoch 23/100
10/10 - 1s - 115ms/step - accuracy: 0.9441 - loss: 0.2171 - val_accuracy: 0.6875 - val_loss: 3.1442
Epoch 24/100
10/10 - 1s - 111ms/step - accuracy: 0.9276 - loss: 0.2010 - val_accuracy: 0.8125 - val_loss: 2.9902
Epoch 25/100
10/10 - 1s - 109ms/step - accuracy: 0.9605 - loss: 0.1603 - val_accuracy: 0.8750 - val_loss: 2.7800
Epoch 26/100
10/10 - 1s - 110ms/step - accuracy: 0.9572 - loss: 0.1312 - val_accuracy: 0.7500 - val_loss: 2.7827
Epoch 27/100
10/10 - 1s - 110ms/step - accuracy: 0.9638 - loss: 0.1356 - val_accuracy: 0.8125 - val_loss: 2.7697
Epoch 28/100
10/10 - 1s - 112ms/step - accuracy: 0.9474 - loss: 0.1750 - val_accuracy: 0.8125 - val_loss: 2.6994
Epoch 29/100
10/10 - 1s - 110ms/step - accuracy: 0.9474 - loss: 0.1363 - val_accuracy: 0.9375 - val_loss: 2.6153
Epoch 30/100
10/10 - 1s - 112ms/step - accuracy: 0.9276 - loss: 0.1971 - val_accuracy: 0.8125 - val_loss: 2.3204
Epoch 31/100
10/10 - 1s - 116ms/step - accuracy: 0.9375 - loss: 0.2138 - val_accuracy: 0.8125 - val_loss: 2.2279
Epoch 32/100
10/10 - 4s - 423ms/step - accuracy: 0.9408 - loss: 0.1866 - val_accuracy: 1.0000 - val_loss: 2.0679
Epoch 33/100
10/10 - 1s - 116ms/step - accuracy: 0.9474 - loss: 0.1778 - val_accuracy: 0.8750 - val_loss: 2.2333
Epoch 34/100
10/10 - 1s - 111ms/step - accuracy: 0.9539 - loss: 0.1598 - val_accuracy: 1.0000 - val_loss: 2.3613
Epoch 35/100
10/10 - 1s - 113ms/step - accuracy: 0.9605 - loss: 0.1347 - val_accuracy: 1.0000 - val_loss: 2.1150
Epoch 36/100
10/10 - 1s - 112ms/step - accuracy: 0.9375 - loss: 0.1855 - val_accuracy: 0.9375 - val_loss: 1.9878
Epoch 37/100
10/10 - 1s - 114ms/step - accuracy: 0.9342 - loss: 0.2047 - val_accuracy: 1.0000 - val_loss: 1.7495
Epoch 38/100
10/10 - 1s - 113ms/step - accuracy: 0.9441 - loss: 0.1502 - val_accuracy: 1.0000 - val_loss: 1.2873
Epoch 39/100
10/10 - 1s - 122ms/step - accuracy: 0.9309 - loss: 0.1838 - val_accuracy: 0.9375 - val_loss: 1.3934
Epoch 40/100
10/10 - 1s - 113ms/step - accuracy: 0.9704 - loss: 0.1375 - val_accuracy: 1.0000 - val_loss: 1.6130
Epoch 41/100
10/10 - 1s - 110ms/step - accuracy: 0.9572 - loss: 0.1405 - val_accuracy: 1.0000 - val_loss: 1.3061
Epoch 42/100
10/10 - 1s - 112ms/step - accuracy: 0.9638 - loss: 0.1327 - val_accuracy: 1.0000 - val_loss: 0.9775
Epoch 43/100
10/10 - 1s - 114ms/step - accuracy: 0.9507 - loss: 0.1585 - val_accuracy: 1.0000 - val_loss: 1.0045
Epoch 44/100
10/10 - 1s - 110ms/step - accuracy: 0.9441 - loss: 0.1936 - val_accuracy: 1.0000 - val_loss: 1.1656
Epoch 45/100
10/10 - 1s - 110ms/step - accuracy: 0.9671 - loss: 0.0900 - val_accuracy: 1.0000 - val_loss: 0.9886
Epoch 46/100
10/10 - 1s - 111ms/step - accuracy: 0.9605 - loss: 0.1151 - val_accuracy: 1.0000 - val_loss: 0.6414
Epoch 47/100
10/10 - 1s - 115ms/step - accuracy: 0.9836 - loss: 0.0554 - val_accuracy: 0.8750 - val_loss: 0.4818
Epoch 48/100
10/10 - 1s - 112ms/step - accuracy: 0.9737 - loss: 0.0706 - val_accuracy: 0.9375 - val_loss: 0.3388
Epoch 49/100
10/10 - 1s - 109ms/step - accuracy: 0.9770 - loss: 0.0982 - val_accuracy: 1.0000 - val_loss: 0.4002
Epoch 50/100
10/10 - 1s - 113ms/step - accuracy: 0.9803 - loss: 0.0433 - val_accuracy: 1.0000 - val_loss: 0.3426
Epoch 51/100
10/10 - 1s - 118ms/step - accuracy: 0.9803 - loss: 0.0521 - val_accuracy: 1.0000 - val_loss: 0.2244
Epoch 52/100
10/10 - 1s - 116ms/step - accuracy: 0.9737 - loss: 0.0746 - val_accuracy: 1.0000 - val_loss: 0.1702
Epoch 53/100
10/10 - 1s - 115ms/step - accuracy: 0.9901 - loss: 0.0494 - val_accuracy: 1.0000 - val_loss: 0.1517
Epoch 54/100
10/10 - 1s - 117ms/step - accuracy: 0.9901 - loss: 0.0494 - val_accuracy: 1.0000 - val_loss: 0.1012
Epoch 55/100
10/10 - 1s - 120ms/step - accuracy: 0.9803 - loss: 0.0634 - val_accuracy: 1.0000 - val_loss: 0.0553
Epoch 56/100
10/10 - 1s - 118ms/step - accuracy: 0.9704 - loss: 0.0828 - val_accuracy: 1.0000 - val_loss: 0.0694
Epoch 57/100
10/10 - 1s - 116ms/step - accuracy: 0.9770 - loss: 0.0874 - val_accuracy: 1.0000 - val_loss: 0.1051
Epoch 58/100
10/10 - 4s - 424ms/step - accuracy: 0.9671 - loss: 0.0928 - val_accuracy: 1.0000 - val_loss: 0.1174
Epoch 59/100
10/10 - 1s - 112ms/step - accuracy: 0.9704 - loss: 0.0693 - val_accuracy: 1.0000 - val_loss: 0.1299
Epoch 60/100
10/10 - 1s - 112ms/step - accuracy: 0.9836 - loss: 0.0804 - val_accuracy: 0.9375 - val_loss: 0.1798
Epoch 61/100
10/10 - 1s - 114ms/step - accuracy: 0.9474 - loss: 0.1542 - val_accuracy: 1.0000 - val_loss: 0.0511
Epoch 62/100
10/10 - 1s - 115ms/step - accuracy: 0.9572 - loss: 0.1051 - val_accuracy: 1.0000 - val_loss: 0.0241
Epoch 63/100
10/10 - 1s - 112ms/step - accuracy: 0.9507 - loss: 0.1264 - val_accuracy: 1.0000 - val_loss: 0.0265
Epoch 64/100
10/10 - 1s - 113ms/step - accuracy: 0.9507 - loss: 0.2049 - val_accuracy: 1.0000 - val_loss: 0.0351
Epoch 65/100
10/10 - 1s - 112ms/step - accuracy: 0.9638 - loss: 0.1301 - val_accuracy: 1.0000 - val_loss: 0.0739
Epoch 66/100
10/10 - 1s - 113ms/step - accuracy: 0.9868 - loss: 0.0624 - val_accuracy: 1.0000 - val_loss: 0.0344
Epoch 67/100
10/10 - 1s - 112ms/step - accuracy: 0.9737 - loss: 0.0675 - val_accuracy: 1.0000 - val_loss: 0.0054
Epoch 68/100
10/10 - 1s - 110ms/step - accuracy: 0.9671 - loss: 0.1009 - val_accuracy: 1.0000 - val_loss: 0.0057
Epoch 69/100
10/10 - 1s - 110ms/step - accuracy: 0.9704 - loss: 0.0923 - val_accuracy: 1.0000 - val_loss: 0.0132
Epoch 70/100
10/10 - 1s - 109ms/step - accuracy: 0.9770 - loss: 0.0810 - val_accuracy: 1.0000 - val_loss: 0.0541
Epoch 71/100
10/10 - 1s - 115ms/step - accuracy: 0.9474 - loss: 0.1472 - val_accuracy: 1.0000 - val_loss: 0.0385
Epoch 72/100
10/10 - 1s - 118ms/step - accuracy: 0.9770 - loss: 0.0923 - val_accuracy: 1.0000 - val_loss: 0.0138
Epoch 73/100
10/10 - 1s - 126ms/step - accuracy: 0.9704 - loss: 0.0910 - val_accuracy: 1.0000 - val_loss: 0.0211
Epoch 74/100
10/10 - 1s - 120ms/step - accuracy: 0.9704 - loss: 0.0801 - val_accuracy: 1.0000 - val_loss: 0.0029
Epoch 75/100
10/10 - 1s - 114ms/step - accuracy: 0.9901 - loss: 0.0472 - val_accuracy: 1.0000 - val_loss: 4.4790e-04
Epoch 76/100
10/10 - 1s - 109ms/step - accuracy: 0.9704 - loss: 0.0788 - val_accuracy: 1.0000 - val_loss: 0.0059
Epoch 77/100
10/10 - 1s - 107ms/step - accuracy: 0.9671 - loss: 0.0848 - val_accuracy: 1.0000 - val_loss: 0.0083
Epoch 78/100
10/10 - 1s - 109ms/step - accuracy: 0.9803 - loss: 0.0606 - val_accuracy: 1.0000 - val_loss: 6.5273e-04
Epoch 79/100
10/10 - 1s - 109ms/step - accuracy: 0.9671 - loss: 0.1323 - val_accuracy: 1.0000 - val_loss: 1.6888e-04
Epoch 80/100
10/10 - 1s - 113ms/step - accuracy: 0.9737 - loss: 0.0758 - val_accuracy: 1.0000 - val_loss: 0.0014
Epoch 81/100
10/10 - 1s - 109ms/step - accuracy: 0.9704 - loss: 0.1115 - val_accuracy: 1.0000 - val_loss: 0.0137
Epoch 82/100
10/10 - 1s - 109ms/step - accuracy: 0.9605 - loss: 0.1072 - val_accuracy: 1.0000 - val_loss: 0.0216
Epoch 83/100
10/10 - 1s - 110ms/step - accuracy: 0.9605 - loss: 0.1289 - val_accuracy: 1.0000 - val_loss: 0.0178
Epoch 84/100
10/10 - 4s - 413ms/step - accuracy: 0.9704 - loss: 0.0818 - val_accuracy: 1.0000 - val_loss: 0.0020
Epoch 85/100
10/10 - 1s - 111ms/step - accuracy: 0.9836 - loss: 0.0323 - val_accuracy: 1.0000 - val_loss: 0.0018
Epoch 86/100
10/10 - 1s - 111ms/step - accuracy: 0.9671 - loss: 0.1149 - val_accuracy: 1.0000 - val_loss: 3.7641e-04
Epoch 87/100
10/10 - 1s - 115ms/step - accuracy: 0.9901 - loss: 0.0515 - val_accuracy: 1.0000 - val_loss: 0.0047
Epoch 88/100
10/10 - 1s - 111ms/step - accuracy: 0.9737 - loss: 0.0804 - val_accuracy: 1.0000 - val_loss: 0.0039
Epoch 89/100
10/10 - 1s - 114ms/step - accuracy: 0.9934 - loss: 0.0373 - val_accuracy: 1.0000 - val_loss: 0.0024
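
Training stops after epoch 89: the best validation loss (1.6888e-04) occurred at epoch 79, and since it did not improve over the following 10 epochs, early stopping triggers and the best weights are restored.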

Evaluate the score

score = cnn_model.evaluate( np.array(x_test), np.array(y_test_categorical), verbose=0)

print(f'test loss: {score[0]:.4f}')
print(f'test accuracy: {score[1]*100:.2f} %')
test loss: 0.2261
test accuracy: 92.50 %
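
Since accuracy_score from scikit-learn was imported earlier but not yet used, the same test accuracy can be cross-checked from the model's raw class predictions (a minimal sketch):

# Cross-check the test accuracy from the model's raw class predictions
y_pred_probs = cnn_model.predict(x_test, verbose=0)  # (80, 40) softmax outputs
y_pred = np.argmax(y_pred_probs, axis=1)             # predicted class indices
print(f'sklearn accuracy: {accuracy_score(y_test, y_pred) * 100:.2f} %')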
# Plot accuracy and loss curves
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

Summary

The steps implemented to build a CNN model for the ORL database of faces are summarized in order below.

  1. Data Loading and Preprocessing:
    • Images are loaded and normalized in the range [0, 1].
    • The initial visualization ensures the images and labels are loaded correctly.
  2. Training, Test, and Validation data splitting:
    • The dataset is split into training and test sets using train_test_split() from sklearn.model_selection.
    • The split is 80% for training and 20% for testing, which allows model performance to be evaluated on unseen data.
    • A further 5% of the training data is split off for validation, kept small because the dataset contains relatively few images overall.
  3. Data Shape Check:
    • Reshaping is performed to ensure all images have the same dimensions (112x92x1) suitable for input to the CNN model.
    • Labels are converted into categorical format using to_categorical() from keras.utils.
  4. CNN Model Architecture:
    • The defined CNN model comprises two Conv2D layers followed by MaxPooling layers.
    • Dense layers with ReLU activations are included, along with a dropout layer for regularization to prevent overfitting.
    • A Batch Normalization layer is also added after the first convolution for better convergence during training.
  5. Model Compilation and Training:
    • The model is compiled with ‘adam’ optimizer and ‘categorical_crossentropy’ loss.
    • Model training (fit()) is performed using the training and validation data.
    • Training history is stored to analyze the model’s performance over epochs.
  6. Model Evaluation:
    • The trained model is evaluated on the test set to calculate loss and accuracy.
    • Finally, accuracy and loss curves are plotted to visualize the model’s training and validation performance.