Image Vector Representation for ML using OpenCV

Learn how to use OpenCV, a popular computer vision library, to convert images into vector representations for machine learning applications.

Tutorial Overview

This tutorial is divided into four parts; they are:

  • What are the Advantages of Using HOG or BoW for Image Vector Representation?
  • The Histogram of Oriented Gradients Technique
  • The Bag-of-Words Technique
  • Putting the Techniques to Test

What are the Advantages of Using HOG or BoW for Image Vector Representation?

When working with machine learning algorithms, image data typically undergoes a pre-processing step that structures it into a form the algorithms can work with. 

In OpenCV, for instance, the ml module requires that the image data is fed into the machine learning algorithms in the form of feature vectors of equal length. 

Each training sample is a vector of values (in Computer Vision it’s sometimes referred to as feature vector). Usually all the vectors have the same number of components (features); OpenCV ml module assumes that.

– OpenCV, 2023. 

One way of structuring the image data is to flatten it out into a one-dimensional vector whose length equals the number of pixels in the image. For example, a 20×20 pixel image would result in a one-dimensional vector of length 400. This one-dimensional vector serves as the feature set fed into the machine learning algorithm, where each pixel's intensity value represents a single feature.
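
For instance, a minimal sketch of this flattening with NumPy looks as follows (the random image is a hypothetical stand-in for real grayscale data):


from numpy import random

# A hypothetical 20x20 grayscale image
img = (random.rand(20, 20) * 255).astype('uint8')

# Flatten the image into a one-dimensional feature vector of length 400
feature_vector = img.flatten()
print(feature_vector.shape)  # (400,)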

However, while this is the simplest feature set we can create, it is not the most effective one, especially for larger images, which result in too many input features to be processed effectively by a machine learning algorithm. 

This can dramatically impact the performance of machine learning algorithms fit on data with many input features, generally referred to as the “curse of dimensionality.”

– Introduction to Dimensionality Reduction for Machine Learning, 2020.

Instead, we want to reduce the number of input features representing each image so that, in turn, the machine learning algorithm can generalize better to the input data. In more technical terms, it is desirable to perform dimensionality reduction, transforming the image data from a high-dimensional space to a lower-dimensional one. 

One way of doing so is to apply feature extraction and representation techniques, such as the Histogram of Oriented Gradients (HOG) or the Bag-of-Words (BoW), to represent an image in a more compact manner and, in turn, reduce the redundancy in the feature set and the computational requirements to process it. 

Another advantage of converting the image data into a feature vector using the aforementioned techniques is that the vector representation of the image becomes more robust to variations in illumination, scale, and viewpoint.

In the following sections, we will explore using the HOG and BoW techniques for image vector representation.

The Histogram of Oriented Gradients Technique

HOG is a feature extraction technique that represents the local shape and appearance of objects within an image by the distribution of their edge directions. 

In a nutshell, the HOG technique performs the following steps when applied to an image:

  1. Computes the image gradients in horizontal and vertical directions using, for example, a Prewitt operator. The magnitude and direction of the gradient are then computed for every pixel in the image (see the sketch after this list). 
  2. Divides the image into non-overlapping cells of fixed size and computes a histogram of gradients for each cell. This histogram representation of every image cell is more compact and more robust to noise. The cell size is typically set according to the size of the image features we want to capture.  
  3. Concatenates the histograms over blocks of cells into one-dimensional feature vectors and normalizes them. This makes the descriptor more robust to lighting variations.
  4. Finally, concatenates all normalized feature vectors representing the blocks of cells to obtain a final feature vector representation of the entire image. 
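
To make step 1 concrete, here is a minimal sketch that computes per-pixel gradient magnitudes and directions. Note that it uses OpenCV's Sobel operator rather than the Prewitt operator mentioned above, since OpenCV provides Sobel directly (a Prewitt kernel could be applied with filter2D instead); the image path is assumed to be the digits image used later in this tutorial:


from cv2 import imread, Sobel, cartToPolar, CV_32F, IMREAD_GRAYSCALE

# Load an image in grayscale
img = imread('Images/digits.png', IMREAD_GRAYSCALE)

# Compute the horizontal and vertical image gradients
gx = Sobel(img, CV_32F, 1, 0)
gy = Sobel(img, CV_32F, 0, 1)

# Compute the per-pixel gradient magnitude and direction (in degrees)
magnitude, direction = cartToPolar(gx, gy, angleInDegrees=True)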

The HOG implementation in OpenCV takes several input arguments that correspond to the aforementioned steps, including:

  • The window size (winSize) corresponds to the minimum object size to be detected. 
  • The cell size (cellSize) typically captures the size of the image features of interest. 
  • The block size (blockSize) tackles the problem of variation in illumination. 
  • The block stride (blockStride) controls how much neighboring blocks overlap. 
  • The number of histogram bins (nbins) captures gradients between 0 and 180 degrees. 

Let’s create a function, hog_descriptors(), that computes feature vectors for a set of images using the HOG technique:


from cv2 import HOGDescriptor
from numpy import array, uint8


def hog_descriptors(imgs):
    # Create a list to store the HOG feature vectors
    hog_features = []
 
    # Set parameter values for the HOG descriptor based on the image data in use
    winSize = (20, 20)
    blockSize = (10, 10)
    blockStride = (5, 5)
    cellSize = (10, 10)
    nbins = 9
 
    # Set the remaining parameters to their default values
    derivAperture = 1
    winSigma = -1.
    histogramNormType = 0
    L2HysThreshold = 0.2
    gammaCorrection = False
    nlevels = 64
 
    # Create a HOG descriptor
    hog = HOGDescriptor(winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma,
                        histogramNormType, L2HysThreshold, gammaCorrection, nlevels)
 
    # Compute HOG descriptors for the input images and append the feature vectors to the list
    for img in imgs:
        hist = hog.compute(img.reshape(20, 20).astype(uint8))
        hog_features.append(hist)
 
    return array(hog_features)

Note: How the images are reshaped here corresponds to the image dataset that will be used later in this tutorial. If you use a different dataset, remember to adjust this part of the code accordingly. 
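
As a quick sanity check, we could call hog_descriptors() on a few synthetic images (hypothetical random data, not the tutorial's dataset) to confirm the shape of the output:


from numpy import random

# Five hypothetical flattened 20x20 images filled with random values
dummy_imgs = (random.rand(5, 400) * 255).astype('uint8')

dummy_hog = hog_descriptors(dummy_imgs)
print(dummy_hog.shape)  # expected: (5, 81) with the parameter values above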

The Bag-of-Words Technique

The BoW technique was introduced in a previous tutorial as applied to modeling text with machine learning algorithms. 

Nonetheless, this technique can also be applied to computer vision, where an image's local features are treated as the visual words from which a vocabulary is built. For this reason, when applied to computer vision, the BoW technique is often called the Bag-of-Visual-Words technique. 

In a nutshell, the BoW technique performs the following steps when applied to an image:

  1. Extracts feature descriptors from an image using algorithms such as the Scale-Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF). Ideally, the extracted features should be invariant to intensity, scale, rotation, and affine variations. 
  2. Generates codewords from the feature descriptors, where each codeword is representative of similar image patches. One way of generating these codewords is to use k-means clustering to aggregate similar descriptors into clusters, where the cluster centers then represent the visual words, while the number of clusters represents the vocabulary size. 
  3. Maps the feature descriptors to the nearest cluster in the vocabulary, essentially assigning a codeword to each feature descriptor. 
  4. Bins the codewords into a histogram and uses this histogram as a feature vector representation of the image (a sketch of steps 3 and 4 follows this list). 
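
As a concrete illustration of steps 3 and 4, the following sketch maps a set of feature descriptors to their nearest codewords and bins the assignments into a normalized histogram. The descriptor and vocabulary arrays are hypothetical placeholders; the OpenCV code later in this tutorial performs the same mapping internally via BOWImgDescriptorExtractor:


from numpy import argmin, random, zeros
from numpy.linalg import norm

def bow_histogram(descriptors, vocabulary):
    # One histogram bin per codeword (cluster center) in the vocabulary
    hist = zeros(len(vocabulary))
    for descriptor in descriptors:
        # Map the descriptor to its nearest codeword (Euclidean distance)
        distances = norm(vocabulary - descriptor, axis=1)
        hist[argmin(distances)] += 1
    # Normalize so the histogram does not depend on the number of keypoints
    return hist / hist.sum()

# Hypothetical data: 200 random 128-dimensional descriptors, 50 codewords
descriptors = random.rand(200, 128)
vocabulary = random.rand(50, 128)
print(bow_histogram(descriptors, vocabulary).shape)  # (50,)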

Let’s create a function, bow_descriptors(), that applies the BoW technique using SIFT to a set of images:

from cv2 import (SIFT_create, BOWKMeansTrainer, BOWImgDescriptorExtractor,
                 BFMatcher, NORM_L2, cvtColor, COLOR_RGB2GRAY)
from numpy import array, reshape


def bow_descriptors(imgs):
    # Create a SIFT descriptor
    sift = SIFT_create()
 
    # Create a BoW descriptor
    # The number of clusters equal to 50 (analogous to the vocabulary size) has been chosen empirically
    bow_trainer = BOWKMeansTrainer(50)
    bow_extractor = BOWImgDescriptorExtractor(sift, BFMatcher(NORM_L2))
 
    for img in imgs:
        # Reshape each RGB image and convert it to grayscale
        img = reshape(img, (32, 32, 3), 'F')
        img = cvtColor(img, COLOR_RGB2GRAY).transpose()
 
        # Extract the SIFT descriptors
        _, descriptors = sift.detectAndCompute(img, None)
 
        # Add the SIFT descriptors to the BoW vocabulary trainer
        if descriptors is not None:
            bow_trainer.add(descriptors)
 
    # Perform k-means clustering and return the vocabulary
    voc = bow_trainer.cluster()
 
    # Assign the vocabulary to the BoW descriptor extractor
    bow_extractor.setVocabulary(voc)
 
    # Create a list to store the BoW feature vectors
    bow_features = []
 
    for img in imgs:
        # Reshape each RGB image and convert it to grayscale
        img = reshape(img, (32, 32, 3), 'F')
        img = cvtColor(img, COLOR_RGB2GRAY).transpose()
 
        # Compute the BoW feature vector
        hist = bow_extractor.compute(img, sift.detect(img))
 
        # Append the feature vectors to the list
        if hist is not None:
            bow_features.append(hist[0])
 
    return array(bow_features)

Note: How the images are reshaped here corresponds to the image dataset that will be used later in this tutorial. If you use a different dataset, remember to adjust this part of the code accordingly. 

Putting the Techniques to Test

There isn’t necessarily a single best technique for all cases, and the choice of technique for the image data you are working with often requires controlled experiments. 

In this tutorial, as an example, we will apply the HOG technique to the digits dataset that comes with OpenCV, and the BoW technique to images from the CIFAR-10 dataset. To reduce the required processing time, we will only consider a subset of images from these two datasets. Nonetheless, the same code can easily be extended to the full datasets. 

We will start by loading the datasets we will be working with. Recall that we saw how to extract the images from each dataset in a previous tutorial. The digits_dataset and cifar_dataset are Python scripts that I have created, containing the code for loading the digits and CIFAR-10 datasets, respectively:


from digits_dataset import split_images, split_data
from cifar_dataset import load_images
 
# Load the digits image
img, sub_imgs = split_images('Images/digits.png', 20)
 
# Obtain a dataset from the digits image
digits_imgs, _, _, _ = split_data(20, sub_imgs, 0.8)
 
# Load a batch of images from the CIFAR dataset
cifar_imgs = load_images('Images/cifar-10-batches-py/data_batch_1')
 
# Consider only a subset of images
digits_subset = digits_imgs[0:100, :]
cifar_subset = cifar_imgs[0:100, :]

We may then proceed to pass on the datasets to the hog_descriptors() and the bow_descriptors() functions that we have created earlier in this tutorial:


digits_hog = hog_descriptors(digits_subset)
print('Size of HOG feature vectors:', digits_hog.shape)
 
cifar_bow = bow_descriptors(cifar_subset)
print('Size of BoW feature vectors:', cifar_bow.shape)

The complete code listing looks as follows:


from cv2 import (HOGDescriptor, SIFT_create, BOWKMeansTrainer,
                 BOWImgDescriptorExtractor, BFMatcher, NORM_L2, cvtColor, COLOR_RGB2GRAY)
from digits_dataset import split_images, split_data
from cifar_dataset import load_images
from numpy import uint8, array, reshape
 
# Load the digits image
img, sub_imgs = split_images('Images/digits.png', 20)
 
# Obtain a dataset from the digits image
digits_imgs, _, _, _ = split_data(20, sub_imgs, 0.8)
 
# Load a batch of images from the CIFAR dataset
cifar_imgs = load_images('Images/cifar-10-batches-py/data_batch_1')
 
# Consider only a subset of images
digits_subset = digits_imgs[0:100, :]
cifar_subset = cifar_imgs[0:100, :]
 
def hog_descriptors(imgs):
    # Create a list to store the HOG feature vectors
    hog_features = []
 
    # Set parameter values for the HOG descriptor based on the image data in use
    winSize = (20, 20)
    blockSize = (10, 10)
    blockStride = (5, 5)
    cellSize = (10, 10)
    nbins = 9
 
    # Set the remaining parameters to their default values
    derivAperture = 1
    winSigma = -1.
    histogramNormType = 0
    L2HysThreshold = 0.2
    gammaCorrection = False
    nlevels = 64
 
    # Create a HOG descriptor
    hog = HOGDescriptor(winSize, blockSize, blockStride, cellSize, nbins, derivAperture, winSigma,
                        histogramNormType, L2HysThreshold, gammaCorrection, nlevels)
 
    # Compute HOG descriptors for the input images and append the feature vectors to the list
    for img in imgs:
        hist = hog.compute(img.reshape(20, 20).astype(uint8))
        hog_features.append(hist)
 
    return array(hog_features)
 
 
def bow_descriptors(imgs):
    # Create a SIFT descriptor
    sift = SIFT_create()
 
    # Create a BoW descriptor
    # The number of clusters equal to 50 (analogous to the vocabulary size) has been chosen empirically
    bow_trainer = BOWKMeansTrainer(50)
    bow_extractor = BOWImgDescriptorExtractor(sift, BFMatcher(NORM_L2))
 
    for img in imgs:
        # Reshape each RGB image and convert it to grayscale
        img = reshape(img, (32, 32, 3), 'F')
        img = cvtColor(img, COLOR_RGB2GRAY).transpose()
 
        # Extract the SIFT descriptors
        _, descriptors = sift.detectAndCompute(img, None)
 
        # Add the SIFT descriptors to the BoW vocabulary trainer
        if descriptors is not None:
            bow_trainer.add(descriptors)
 
    # Perform k-means clustering and return the vocabulary
    voc = bow_trainer.cluster()
 
    # Assign the vocabulary to the BoW descriptor extractor
    bow_extractor.setVocabulary(voc)
 
    # Create a list to store the BoW feature vectors
    bow_features = []
 
    for img in imgs:
        # Reshape each RGB image and convert it to grayscale
        img = reshape(img, (32, 32, 3), 'F')
        img = cvtColor(img, COLOR_RGB2GRAY).transpose()
 
        # Compute the BoW feature vector
        hist = bow_extractor.compute(img, sift.detect(img))
 
        # Append the feature vectors to the list
        if hist is not None:
            bow_features.append(hist[0])
 
    return array(bow_features)
 
 
digits_hog = hog_descriptors(digits_subset)
print('Size of HOG feature vectors:', digits_hog.shape)
 
cifar_bow = bow_descriptors(cifar_subset)
print('Size of BoW feature vectors:', cifar_bow.shape)

The code above returns the following output:

Size of HOG feature vectors: (100, 81)
Size of BoW feature vectors: (100, 50)

Based on our choice of parameter values, we can see that the HOG technique returns feature vectors of size 1×81 for each image. This means that each image is now represented by a point in an 81-dimensional space. The BoW technique, on the other hand, returns vectors of size 1×50 for each image, where the vector length is determined by the chosen number of k-means clusters, which is analogous to the vocabulary size.
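
If you want to verify where the value of 81 comes from, you can derive it by hand: with a 20×20 window, 10×10 blocks, and a 5×5 block stride, there are 3×3 = 9 block positions, each containing a single 10×10 cell with 9 histogram bins, giving 9 × 1 × 9 = 81 features. A quick sketch to confirm this with OpenCV:


from cv2 import HOGDescriptor

# winSize, blockSize, blockStride, cellSize, nbins, as used in this tutorial
hog = HOGDescriptor((20, 20), (10, 10), (5, 5), (10, 10), 9)
print(hog.getDescriptorSize())  # prints 81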

Hence, we may see that, instead of simply flattening out each image into a one-dimensional vector, we have managed to represent each image more compactly by applying the HOG and BoW techniques. 

Our next step will be to see how we can exploit this data using different machine learning algorithms. 

