ML | VGG-16 implementation in Keras

This article is about the VGG-16 model for large-scale image classification. VGG-16 is a convolutional neural network architecture that was trained on the ImageNet dataset, which contains over 14 million images. It was submitted to the ILSVRC 2014 competition. The hyperparameters of VGG-16 are uniform throughout the network (3×3 convolution filters and 2×2 max-pooling everywhere), which is what makes this architecture distinctive. Don’t worry about the technical terms if you are not yet familiar with convolutional neural networks, or CNNs.

At the end of this article, you will be able to implement this model on your system and use it for your task of image classification.

Implementation of VGG-16 with Keras

Firstly, make sure that you have Keras installed on your system. If not, follow the steps mentioned here. To check whether it installed successfully, run the following command in your terminal or command prompt. The latest version of Keras is 2.2.4, as of the date of this article.

python -c "import keras; print(keras.__version__)"

Transfer Learning:

Since training such deep neural network models from scratch is computationally expensive, we use the concept of transfer learning. In transfer learning, we take a model's pre-trained weights and apply them to our own input and task. It is even possible to change the configuration of the architecture to suit our requirements. Without further ado, let’s use deep learning for image classification.
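As a rough sketch of what transfer learning looks like in Keras: load the convolutional base without its classifier, freeze its layers, and attach a new head. The 10-class head below is purely illustrative, and weights=None is used only to avoid the large download in this sketch; in practice you would pass weights="imagenet".

```python
from keras.applications import vgg16
from keras.models import Model
from keras.layers import Flatten, Dense

# Load only the convolutional base (include_top=False drops the
# fully connected classifier). Use weights="imagenet" in practice.
base = vgg16.VGG16(include_top=False, weights=None, input_shape=(224, 224, 3))

# Freeze the pre-trained layers so only the new head gets trained.
for layer in base.layers:
    layer.trainable = False

# Attach a new classifier head for a hypothetical 10-class problem.
x = Flatten()(base.output)
predictions = Dense(10, activation="softmax")(x)
model = Model(inputs=base.input, outputs=predictions)
```

You would then compile and fit this model on your own dataset as usual.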


Firstly, let’s import all the necessary libraries

import numpy as np
import math
import scipy.misc
from matplotlib.pyplot import imshow
from keras.applications import vgg16
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from keras.applications.imagenet_utils import decode_predictions

We get the following output by executing this code:

Using Theano backend.

To run the model, we call it from keras.applications and visualize all the building blocks using model.summary().

model = vgg16.VGG16(include_top = True, weights = "imagenet")

Wait till the model downloads all the required pre-trained weights. Until then take a break or read more about CNNs. After it’s done, you’ll see a long summary of all the layers of the network. Let’s input our image to see if it works.

img_path = "/home/Desktop/tiger.jpg"  # the image and its path will differ for each user
my_image = scipy.misc.imread(img_path)
imshow(my_image)

I tested the model with a white tiger image. Now let’s preprocess our image. Since VGG-16 expects 224×224-pixel RGB input, we do the following:

imge = image.load_img(img_path, target_size=(224, 224))
img_arr = image.img_to_array(imge)
img_arr = np.expand_dims(img_arr, axis=0)
img_arr = preprocess_input(img_arr)
print("Input image shape:", img_arr.shape)


Input image shape: (1, 224, 224, 3)
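Under the hood, the preprocessing above amounts to a few array operations. Here is a minimal NumPy sketch, assuming the default "caffe" mode of preprocess_input, which flips RGB to BGR and subtracts the per-channel ImageNet means:

```python
import numpy as np

# A toy stand-in for a decoded 224x224 RGB image (values 0-255).
img = np.random.randint(0, 256, size=(224, 224, 3)).astype("float32")

# expand_dims prepends the batch dimension the model expects.
batch = np.expand_dims(img, axis=0)

# preprocess_input (default "caffe" mode) flips RGB to BGR and
# subtracts the ImageNet per-channel means.
bgr = batch[..., ::-1]
means = np.array([103.939, 116.779, 123.68])  # B, G, R channel means
preprocessed = bgr - means

print(batch.shape)         # (1, 224, 224, 3)
print(preprocessed.shape)  # (1, 224, 224, 3)
```

The leading 1 in the shape is the batch dimension: the model always takes a batch of images, even if the batch contains just one.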

Lastly, we get the prediction from our model and voila!

preds = model.predict(img_arr)
list_pred = list(decode_predictions(preds)[0][0])
list_pred[2] = math.floor(float(list_pred[2])*100)
print("I, the VGG16 network, can say with {}% surety that the given image is {}".format(list_pred[2],list_pred[1]))

Let’s see what our network has to say about our image:

I, the VGG16 network, can say with 76% surety that the given image is tiger
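For context, decode_predictions returns, for each image in the batch, a list of (class_id, class_name, score) tuples sorted by score. The formatting step above can be sketched with a hypothetical top-1 tuple:

```python
import math

# A hypothetical top-1 tuple as returned by decode_predictions.
top_prediction = ("n02129604", "tiger", 0.7673)

class_id, class_name, score = top_prediction
confidence = math.floor(score * 100)  # truncate to a whole percentage
print("{}% sure it's a {}".format(confidence, class_name))  # prints "76% sure it's a tiger"
```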

I’m certain you’ll get way better results on your images.

VGG-16 has a deeper variant, namely VGG-19. You can follow the same steps described above to implement VGG-19 as well. We encourage you to apply it to your own classification problem, and let us know if you found this article useful.
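Swapping in VGG-19 is essentially a one-line change, assuming the same keras.applications API (weights=None here only avoids the large download in this sketch; pass weights="imagenet" for the pre-trained model):

```python
from keras.applications import vgg19

# Same call pattern as VGG16; use weights="imagenet" in practice.
model = vgg19.VGG19(include_top=True, weights=None)
model.summary()  # 19 weight layers instead of 16
```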


