Last Updated on 2021-06-02 by Clay
CIFAR-10 is a dataset similar to MNIST: it contains 60,000 images across 10 classes, with 6,000 images per class, split into 50,000 training images and 10,000 test images.
However, the images in CIFAR-10 are 32 x 32 in RGB mode, so training a classifier is more complicated. The best reported accuracy at the time was 96.53%, significantly lower than on MNIST.
But don't worry, this is not the hardest toy dataset. CIFAR-100, for example, is even more difficult.
Enough digressing; let's look at the code!
Prepare
If you don't have a GPU to accelerate training, you can use Google Colab. For more information, you can read this article: How to use the free GPU from Google Colab
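If you are not sure whether your environment can actually see a GPU, a quick sanity check (assuming a TensorFlow backend) is:

import tensorflow as tf

# Prints a device name such as '/device:GPU:0' if a GPU is available,
# or an empty string if not
print(tf.test.gpu_device_name())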
Program
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

from keras.models import Sequential, load_model
from keras.datasets import cifar10
from keras.utils import np_utils, plot_model
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
First, import all the packages we need.
(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()

# Scale pixel values from [0, 255] to [0, 1]
x_train = X_train.astype('float32') / 255
x_test = X_test.astype('float32') / 255

# One-hot encode the integer labels
y_train = np_utils.to_categorical(Y_train)
y_test = np_utils.to_categorical(Y_test)
Prepare the CIFAR-10 dataset. We can use the copy that ships with Keras.
The pre-processing is similar to MNIST: load_data() already splits the data into "training data" and "test data", and each part is further divided into "image data" and "labels".
("x" is the image data, and "y" is the label.)
model = Sequential()

# Three convolution blocks: two 3x3 convolutions followed by 2x2 max-pooling
# (only the first layer needs input_shape)
model.add(Conv2D(filters=64, kernel_size=3, input_shape=(32, 32, 3), activation='relu', padding='same'))
model.add(Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPool2D(pool_size=2))

model.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'))
model.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPool2D(pool_size=2))

model.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'))
model.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPool2D(pool_size=2))

# Classifier head: flatten, one hidden layer with dropout, softmax over 10 classes
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(rate=0.25))
model.add(Dense(10, activation='softmax'))
We build a multi-layer CNN here. You can try different model architectures, such as the sketch below, and check whether the test accuracy improves.
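For example, here is a minimal sketch of one possible variation (a hypothetical alternative, not the model trained in this article): a smaller network with Dropout after each pooling step. Whether it actually helps is something to verify by re-training.

# A hypothetical alternative architecture to experiment with
model_v2 = Sequential()
model_v2.add(Conv2D(filters=64, kernel_size=3, input_shape=(32, 32, 3), activation='relu', padding='same'))
model_v2.add(MaxPool2D(pool_size=2))
model_v2.add(Dropout(rate=0.25))
model_v2.add(Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'))
model_v2.add(MaxPool2D(pool_size=2))
model_v2.add(Dropout(rate=0.25))
model_v2.add(Flatten())
model_v2.add(Dense(512, activation='relu'))
model_v2.add(Dense(10, activation='softmax'))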
model.summary()
We can print an overview of the model. It looks like this:
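The exact layer names and formatting depend on your Keras version and session, but with this architecture the shapes and parameter counts work out as follows:

_________________________________________________________________
Layer (type)                   Output Shape              Param #
=================================================================
conv2d (Conv2D)                (None, 32, 32, 64)        1792
conv2d_1 (Conv2D)              (None, 32, 32, 64)        36928
max_pooling2d (MaxPooling2D)   (None, 16, 16, 64)        0
conv2d_2 (Conv2D)              (None, 16, 16, 128)       73856
conv2d_3 (Conv2D)              (None, 16, 16, 128)       147584
max_pooling2d_1 (MaxPooling2D) (None, 8, 8, 128)         0
conv2d_4 (Conv2D)              (None, 8, 8, 128)         147584
conv2d_5 (Conv2D)              (None, 8, 8, 128)         147584
max_pooling2d_2 (MaxPooling2D) (None, 4, 4, 128)         0
flatten (Flatten)              (None, 2048)              0
dense (Dense)                  (None, 512)               1049088
dropout (Dropout)              (None, 512)               0
dense_1 (Dense)                (None, 10)                5130
=================================================================
Total params: 1,609,546
Trainable params: 1,609,546
Non-trainable params: 0
_________________________________________________________________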
# Cross-entropy loss with the Adam optimizer; track accuracy during training
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, batch_size=64, verbose=1)
Then we compile the model and start training.
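As a side note, if you want to watch validation accuracy during training, fit() also accepts a validation_split argument (not used in this article); a small sketch:

# Optional: hold out 10% of the training data for validation each epoch
history = model.fit(x_train, y_train, epochs=10, batch_size=64, verbose=1, validation_split=0.1)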
loss, accuracy = model.evaluate(x_test, y_test)
print('Test:')
print('Loss:', loss)
print('Accuracy:', accuracy)
Evaluate the model on the test data; the output looks like this:
10000/10000 [==============================] - 3s 259us/step
Test:
Loss: 0.8153500760555268
Accuracy: 0.7908
It is normal for your accuracy to differ from mine: each run starts from a different random initialization, so training converges to a different local optimum.
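If you want more repeatable runs, one common approach is to seed the random number generators before building the model. A minimal sketch (the TensorFlow call depends on your version, and GPU training can still be non-deterministic):

import random
import numpy as np
import tensorflow as tf

# Seeding reduces, but on GPU does not eliminate, run-to-run variation
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)  # on TensorFlow 1.x this is tf.set_random_seed(42)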
References
- https://keras.io/examples/cifar10_cnn/
- https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py
- https://machinelearningmastery.com/how-to-develop-a-cnn-from-scratch-for-cifar-10-photo-classification/