Classifying Images¶

Followed along (with tweaks) from this tutorial: https://www.youtube.com/watch?v=bemDFpNooA8

Using the Fashion-MNIST dataset built into the TensorFlow library

Import Libraries and Set Up Test and Training Data¶

In [1]:
import tensorflow as tf
In [2]:
mnist = tf.keras.datasets.fashion_mnist
In [3]:
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
In [4]:
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
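
As a quick check (an optional extra, not in the video), the array shapes confirm that Fashion-MNIST contains 60,000 training and 10,000 test images, each 28x28 grayscale:

# Fashion-MNIST: 60,000 training and 10,000 test images, each 28x28 grayscale.
print(training_images.shape, training_labels.shape)   # (60000, 28, 28) (60000,)
print(test_images.shape, test_labels.shape)           # (10000, 28, 28) (10000,)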

View the First 25 Training Images¶

In [5]:
import matplotlib.pyplot as plt

plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(training_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[training_labels[i]])
plt.show()

Normalise the Data (so each pixel value is 0-1 rather than 0-255)¶

In [6]:
training_images  = training_images / 255.0
test_images = test_images / 255.0
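
A quick sanity check (an optional extra, not part of the original) to confirm the scaling worked:

# Pixel values should now lie in [0, 1] instead of [0, 255].
print(training_images.min(), training_images.max())   # 0.0 1.0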

Design Model¶

(Flatten turns each 2D 28x28 image into a 1D array of 784 values. Softmax turns the 10 output values into probabilities that sum to 1; the class with the largest probability is the prediction, as shown in the sketch after the model definition.)

In [7]:
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(), 
                                    tf.keras.layers.Dense(128, activation=tf.nn.relu), 
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
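
To make the Flatten/Softmax comment above concrete, here is a small sketch (using made-up scores, not real model outputs) of how softmax turns 10 raw scores into probabilities and how the largest one becomes the predicted class:

import numpy as np

scores = np.array([1.2, 0.3, -0.5, 2.8, 0.0, -1.1, 0.7, 0.1, -0.3, 0.9])  # made-up scores
probs = np.exp(scores) / np.exp(scores).sum()   # softmax

print(probs.round(3))                   # 10 probabilities
print(probs.sum())                      # they sum to 1.0
print(class_names[np.argmax(probs)])    # class with the highest probability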

Compile Model¶

In [8]:
model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics=['accuracy'])
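
The "sparse" loss is used because the labels are plain integers (0-9) rather than one-hot vectors. A toy example (with made-up probabilities, not from the model) of what the loss computes:

import numpy as np
import tensorflow as tf

# Two samples with integer class labels (no one-hot encoding needed).
y_true = np.array([3, 0])
y_pred = np.array([[0.05, 0.05, 0.1, 0.7, 0.02, 0.02, 0.02, 0.02, 0.01, 0.01],
                   [0.6,  0.1,  0.05, 0.05, 0.05, 0.05, 0.05, 0.02, 0.02, 0.01]])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
print(float(loss_fn(y_true, y_pred)))   # ~0.434: the mean of -log(0.7) and -log(0.6)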

Fit Model¶

In [9]:
model.fit(training_images, training_labels, epochs=5)
Epoch 1/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 702us/step - accuracy: 0.7859 - loss: 0.6211
Epoch 2/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 691us/step - accuracy: 0.8625 - loss: 0.3823
Epoch 3/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 696us/step - accuracy: 0.8766 - loss: 0.3414
Epoch 4/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 708us/step - accuracy: 0.8836 - loss: 0.3159
Epoch 5/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 702us/step - accuracy: 0.8879 - loss: 0.3004
Out[9]:
<keras.src.callbacks.history.History at 0x16ab01490>
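
model.fit returns a History object (the Out[9] line above). As an optional extra, you could keep a reference to it by writing history = model.fit(...) instead, then plot the training curves:

history = model.fit(training_images, training_labels, epochs=5)

plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['accuracy'], label='accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()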

Evaluate Model¶

(Test accuracy about 86.7%, slightly below the final training accuracy of ~88.8%)

In [10]:
model.evaluate(test_images, test_labels)
313/313 ━━━━━━━━━━━━━━━━━━━━ 0s 310us/step - accuracy: 0.8659 - loss: 0.3671
Out[10]:
[0.3643404543399811, 0.8668000102043152]
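
To see which classes the model confuses with each other (an optional addition, not in the video), a simple confusion matrix can be built from the test-set predictions:

import numpy as np

# Rows = true class, columns = predicted class.
probs = model.predict(test_images)
pred_labels = np.argmax(probs, axis=1)

confusion = np.zeros((10, 10), dtype=int)
for true, pred in zip(test_labels, pred_labels):
    confusion[true, pred] += 1

print(confusion)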

Choosing a Random Set of 25 Images from the Test Data and Using the Model To Classify Them¶

In [11]:
import numpy as np
import random


# Pick 25 random indices from the 10,000 test images and classify those images.
random_indices = random.sample(range(test_images.shape[0]), 25)

predictions = model.predict(test_images[random_indices])

plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(test_images[random_indices[i]], cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions[i])
    true_label = test_labels[random_indices[i]]
    if predicted_label == true_label:
        color = 'green'
    else:
        color = 'red'
    # Label format: "predicted (actual)"; green if correct, red if wrong.
    plt.xlabel("{} ({})".format(class_names[predicted_label], class_names[true_label]), color=color)
plt.show()
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 15ms/step
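
As a final optional extra (not from the original tutorial), you could print the full probability vector for one of these random images to see how confident the model is in its prediction:

# Inspect the model's confidence for the first random test image.
i = 0
for name, p in zip(class_names, predictions[i]):
    print(f"{name:>12s}: {p:.3f}")
print("Predicted:", class_names[np.argmax(predictions[i])],
      "| Actual:", class_names[test_labels[random_indices[i]]])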