Image Classification in Python using CNN
Hey everyone, today’s topic is image classification in Python. Humans recognize images the moment they see them; it doesn’t take any intensive training for us to identify a building or a car.
What if we want a computer to recognize an image? That is image classification, and it is useful in computer vision and many other areas.
How to classify images?
It’s not an easy task for a computer to recognize images. We need to train it extensively: the machine has to analyze a huge number of images before it can recognize a single new one. Let’s imagine a dataset with images of dogs and cats in separate folders. First, we need to build the model, and the model we use here is a Convolutional Neural Network (CNN).
A CNN is a feed-forward neural network. It learns weights from the images it is trained on and uses them to distinguish one image from another. Before you proceed, it helps to know a few basic image properties (a quick way to peek at these values is sketched below):
- saturation, RGB intensity, sharpness, exposure, etc. of images
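As a quick aside, here is a minimal sketch (using Pillow and NumPy, which are not part of this tutorial’s own code) of how you could peek at an image’s size and per-channel RGB intensity; the file path is only a placeholder for any image in your dataset.

import numpy as np
from PIL import Image

# Load any image from the dataset as an RGB array; the path below is only a placeholder.
picture = Image.open('dataset/training_set/cats/cat.1.jpg').convert('RGB')
pixels = np.asarray(picture)

print(pixels.shape)              # (height, width, 3) -- one plane per colour channel
print(pixels.mean(axis=(0, 1)))  # average intensity of the R, G and B channels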
Classification using a CNN model
These are the four steps we will go through:
- Step 1: Preprocess the images – resize them to a fixed size so every input matrix has the same shape
- Step 2: Convolution – slide feature detectors (filters) over the image matrix to produce feature maps
- Step 3: Max pooling – keep only the strongest feature in each small region, shrinking the feature maps (see the small NumPy sketch after this list)
- Step 4: Full connection – flatten the pooled maps and feed them into fully connected (dense) layers
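To make the convolution and max-pooling steps concrete, here is a tiny hand-rolled NumPy sketch (purely illustrative, not part of the Keras model we build below): it slides one 2x2 filter over a 5x5 matrix of 0’s and 1’s and then pools the resulting feature map in 2x2 blocks.

import numpy as np

# A tiny 5x5 "image" of 0s and 1s and one 2x2 feature detector (filter).
image = np.array([[1, 0, 1, 0, 1],
                  [0, 1, 0, 1, 0],
                  [1, 0, 1, 0, 1],
                  [0, 1, 0, 1, 0],
                  [1, 0, 1, 0, 1]], dtype=float)
kernel = np.array([[1, 0],
                   [0, 1]], dtype=float)

# Step 2 (convolution): slide the filter over the image and sum the element-wise products.
h, w = kernel.shape
feature_map = np.array([[np.sum(image[i:i + h, j:j + w] * kernel)
                         for j in range(image.shape[1] - w + 1)]
                        for i in range(image.shape[0] - h + 1)])
print(feature_map)  # 4x4 map of filter responses

# Step 3 (max pooling): keep the largest value in each non-overlapping 2x2 block.
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)       # 2x2 pooled feature map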
This code builds our model.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

classifier = Sequential()

# Step 2: convolution
# 32 filters (feature detectors) of size 3x3; the input images are 128x128 RGB, so 3 channels.
# The rectifier (ReLU) activation removes negative values and adds non-linearity, since images are not linear.
classifier.add(Conv2D(32, (3, 3), input_shape=(128, 128, 3), activation='relu'))

# Step 3: max pooling
# Reduce the size of the feature maps, and so the number of nodes, while keeping the strongest features.
classifier.add(MaxPooling2D(pool_size=(2, 2)))

# Flattening: put all the pooled feature maps into a single vector.
# Do we lose the spatial structure? No -- convolution and pooling have already captured it in the feature maps.
classifier.add(Flatten())

# Step 4: full connection
# A hidden layer of 256 nodes, then a single sigmoid output node (cat vs. dog).
classifier.add(Dense(units=256, activation='relu'))
classifier.add(Dense(units=1, activation='sigmoid'))

# Compile the CNN
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
Just take a look at the above code. A Sequential model lets us stack layers one after another. Remember, a colour image is a 3D array (height × width × RGB channels); the convolution layer slides 32 feature detectors over that array and produces feature maps. This is the feature-capturing step: we are looking for the most characteristic, most frequently occurring patterns in the dog and cat images, so the model can still spot them wherever they appear in the picture. In a nutshell, a feature map records how strongly each small patch of the image matches a given filter.
Next, we flatten the pooled feature maps, i.e. we go from 2D maps to a single 1D vector where all the features are stored, feed that vector into the dense layers, and optimize the weights with the Adam optimizer. Now what? Train the model on the dataset folders, then try it on the single_prediction folder, which holds one cat and one dog image for testing, and lo! we have the output.
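One thing the post glosses over: the prediction code below uses a training_set object, which means the classifier has to be trained first. Here is a minimal sketch of that step with Keras’ ImageDataGenerator, assuming a recent Keras/TensorFlow version and a folder layout of dataset/training_set/{cats,dogs} and dataset/test_set/{cats,dogs}; the batch size and epoch count are illustrative guesses, not values from the original post.

from keras.preprocessing.image import ImageDataGenerator

# Rescale pixels to [0, 1] and apply light augmentation to the training images.
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

# Assumed folder layout: dataset/training_set/{cats,dogs} and dataset/test_set/{cats,dogs}.
training_set = train_datagen.flow_from_directory('dataset/training_set',
                                                 target_size=(128, 128),
                                                 batch_size=32,
                                                 class_mode='binary')
test_set = test_datagen.flow_from_directory('dataset/test_set',
                                            target_size=(128, 128),
                                            batch_size=32,
                                            class_mode='binary')

# Train the classifier built above; 10 epochs is only an illustrative number.
classifier.fit(training_set, epochs=10, validation_data=test_set)

With the model trained, the prediction below works as expected: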
import numpy as np
from keras.preprocessing import image as img

# Load one test image at the same size the network was trained on (128x128 RGB).
out_img = img.load_img('dataset/single_prediction/cat_or_dog_2.jpg', target_size=(128, 128))
out_img = img.img_to_array(out_img)
out_img = out_img / 255.0                  # scale pixels the same way as the training images
out_img = np.expand_dims(out_img, axis=0)  # add the batch dimension: shape (1, 128, 128, 3)

output = classifier.predict(out_img)
print(training_set.class_indices)          # shows which label maps to 0 and which to 1

if output[0][0] >= 0.5:                    # sigmoid output, so threshold at 0.5 rather than testing == 1
    pred = 'dog'
else:
    pred = 'cat'
print(pred)
Output: cat
Thank you, Meow! If you have any queries, ask me in the comments.