Choosing the right inference framework for real-time object detection has become significantly more challenging, especially when models need to run on low-powered devices. In this article you will learn how to choose the best inference engine for your needs, and see the large performance gain it can give you.
When we aim to deploy models on CPU or mobile devices, we usually focus on lightweight model architectures while neglecting the search for a fast inference engine.
During my research on fast inference on CPU devices, I tested various frameworks that offer a stable Python API. Today we will focus on ONNX Runtime, OpenCV DNN, and Darknet, and compare them in terms of performance (running time) and accuracy.
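To make the running-time comparison concrete, here is a minimal benchmarking sketch in Python. It times a forward pass of a YOLOv3-style model through ONNX Runtime and through OpenCV DNN on CPU. The file names (`yolov3.onnx`, `yolov3.cfg`, `yolov3.weights`), the 416x416 input size, and the random dummy frame are placeholders, not the exact setup used for the measurements in this article; a real accuracy comparison would of course run on labeled images rather than random data.

```python
import time
import numpy as np
import cv2
import onnxruntime as ort

# Placeholder model files -- swap in your own exported YOLOv3 artifacts.
ONNX_MODEL = "yolov3.onnx"
DARKNET_CFG = "yolov3.cfg"
DARKNET_WEIGHTS = "yolov3.weights"

# A dummy 416x416 RGB frame stands in for a real video frame.
frame = np.random.randint(0, 255, (416, 416, 3), dtype=np.uint8)
# NCHW float32 blob, scaled to [0, 1], as YOLOv3 typically expects.
blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(416, 416), swapRB=True)

def time_runs(fn, warmup=5, runs=50):
    """Average wall-clock time per forward pass, after a short warm-up."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# --- ONNX Runtime on CPU ---
session = ort.InferenceSession(ONNX_MODEL, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
ort_ms = time_runs(lambda: session.run(None, {input_name: blob})) * 1000

# --- OpenCV DNN, loading the original Darknet config and weights ---
net = cv2.dnn.readNetFromDarknet(DARKNET_CFG, DARKNET_WEIGHTS)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
out_names = net.getUnconnectedOutLayersNames()

def cv_forward():
    net.setInput(blob)
    return net.forward(out_names)

cv_ms = time_runs(cv_forward) * 1000

print(f"ONNX Runtime: {ort_ms:.1f} ms/frame  |  OpenCV DNN: {cv_ms:.1f} ms/frame")
```

Averaging over many runs after a warm-up matters here: the first forward pass often includes one-time graph optimization and memory allocation, which would otherwise skew the comparison.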
#object-detection #yolov3 #onnx #computer-vision #opencv