The object detection space continues to move quickly. Less than two months ago, the Google Brain team released EfficientDet for object detection, challenging YOLOv3 as the premier model for (near) real-time object detection and pushing the boundaries of what is possible in object detection. We wrote a series of posts comparing YOLOv3 and EfficientDet, training YOLOv3 on custom data, and training EfficientDet on custom data, and we’ve found impressive results.

**See here for our tutorial on how to train YOLOv4 on your custom dataset.**

And now YOLOv4 has been released, showing improvements in COCO Average Precision (AP) and frames per second (FPS) of 10 percent and 12 percent, respectively. In this post, we will see how the authors made this breakthrough by diving into the specifics of the data augmentation techniques used in YOLOv4.

The creator of Mosaic Augmentation, Glenn Jocher, has released a new YOLO training framework titled YOLOv5. You may also want to see our post on YOLOv5 vs. YOLOv4, which explains some of the pros of the new YOLOv5 framework.

YOLOv5 Breakdown


The importance of data augmentation for computer vision is not new! See our post from January explaining how important image preprocessing and augmentation are for computer vision.

What is the Bag of Freebies in YOLOv4?

The authors of YOLOv4 include a series of contributions in their paper titled a “bag of freebies.” These are a series of steps that can be taken to improve the model’s performance without increasing latency at inference time. Because they must not affect the model’s inference time, most of these are improvements to the data management and data augmentation of the training pipeline. These techniques improve and scale up the training set to expose the model to scenarios that would have otherwise been unseen. Data augmentation in computer vision is key to getting the most out of your dataset, and state-of-the-art research continues to validate this assumption.

Data Augmentation in Computer Vision

Image augmentation creates new training examples out of existing training data. It’s impossible to truly capture an image for every real-world scenario our model may be asked to see at inference. Thus, adjusting existing training data to generalize to other situations allows the model to learn from a wider array of situations.

The authors of YOLOv4 cite a number of techniques that ultimately inspired the inclusion of their bag of freebies. We provide an overview below.

Distortion

**Photometric Distortion —** This includes changing the brightness, contrast, saturation, and noise in an image. (For example, we’ve written on blur data augmentation in computer vision.)

Adjusting brightness on our platform
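
As a rough sketch of what a photometric distortion can look like in code (our own illustration in Python with PIL and NumPy, not the YOLOv4 authors’ implementation; the jitter ranges and noise level below are arbitrary choices):

```python
import random
import numpy as np
from PIL import Image, ImageEnhance

def photometric_distort(image: Image.Image) -> Image.Image:
    """Randomly jitter brightness, contrast, and saturation, then add light Gaussian noise."""
    # An enhancement factor of 1.0 leaves the image unchanged.
    image = ImageEnhance.Brightness(image).enhance(random.uniform(0.6, 1.4))
    image = ImageEnhance.Contrast(image).enhance(random.uniform(0.6, 1.4))
    image = ImageEnhance.Color(image).enhance(random.uniform(0.6, 1.4))  # saturation

    # Add mild Gaussian noise in pixel space and clip back to the valid range.
    pixels = np.asarray(image).astype(np.float32)
    pixels += np.random.normal(0.0, 5.0, pixels.shape)
    return Image.fromarray(np.clip(pixels, 0, 255).astype(np.uint8))
```

Because none of these adjustments move any pixels, the bounding box labels stay exactly where they were.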

**Geometric Distortion —** This includes random scaling, cropping, flipping, and rotating. These types of augmentation can be particularly tricky as the bounding boxes are also affected and must be updated. (As an example, we’ve written on how to use random cropping data augmentation in computer vision.)

Flipping images on our platform
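
Geometric distortions are where the labels must move with the pixels. Here is a minimal sketch of a horizontal flip that mirrors the bounding boxes along with the image, assuming boxes are given as [x_min, y_min, x_max, y_max] in pixel coordinates (again our own illustration, not the authors’ code):

```python
import random
import numpy as np

def horizontal_flip(image: np.ndarray, boxes: np.ndarray, p: float = 0.5):
    """Flip an HxWxC image left-right and mirror its [x_min, y_min, x_max, y_max] boxes."""
    if random.random() < p:
        width = image.shape[1]
        image = image[:, ::-1, :].copy()
        boxes = boxes.copy()
        # The new x_min is the old x_max measured from the right edge, and vice versa,
        # which keeps x_min < x_max after the flip.
        boxes[:, [0, 2]] = width - boxes[:, [2, 0]]
    return image, boxes
```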

Those two methods were both pixel adjustments, meaning that the original image could easily be recovered with a series of transformations.

Image Occlusion

**Random Erase —** This is a data augmentation technique that replaces regions of the image with random values or with the mean pixel value of the training set. Typically, it is implemented with a varying proportion of the image erased and a varying aspect ratio of the erased area. Functionally, this becomes a regularization technique that prevents our model from memorizing the training data and overfitting.
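
A minimal sketch of random erasing is below, assuming an HxWxC uint8 image; the erased-area and aspect-ratio ranges are illustrative choices, not values from the YOLOv4 paper:

```python
import random
import numpy as np

def random_erase(image: np.ndarray, p: float = 0.5,
                 area_range=(0.02, 0.2), aspect_range=(0.3, 3.3)) -> np.ndarray:
    """Replace one randomly placed rectangle of the image with random pixel values."""
    if random.random() > p:
        return image
    h, w = image.shape[:2]
    for _ in range(10):  # retry until a sampled rectangle fits inside the image
        target_area = random.uniform(*area_range) * h * w
        aspect = random.uniform(*aspect_range)
        erase_h = int(round((target_area * aspect) ** 0.5))
        erase_w = int(round((target_area / aspect) ** 0.5))
        if 0 < erase_h < h and 0 < erase_w < w:
            top = random.randint(0, h - erase_h)
            left = random.randint(0, w - erase_w)
            out = image.copy()
            out[top:top + erase_h, left:left + erase_w] = np.random.randint(
                0, 256, size=(erase_h, erase_w, image.shape[2]), dtype=image.dtype)
            return out
    return image
```

Filling the rectangle with the mean pixel value of the training set instead of random noise is a one-line swap of the fill values.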
