YOLOv3 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics' open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
Documentation
See the YOLOv3 Docs for full documentation on training, testing and deployment.
Quick Start Examples
Install
Python>=3.6.0 is required with all requirements.txt dependencies installed, including PyTorch>=1.7:
$ git clone https://github.com/ultralytics/yolov3
$ cd yolov3
$ pip install -r requirements.txt
Inference
Inference with YOLOv3 and PyTorch Hub. Models automatically download from the latest YOLOv3 release.
import torch
# Model
model = torch.hub.load('ultralytics/yolov3', 'yolov3') # or yolov3-spp, yolov3-tiny, custom
# Images
img = 'https://ultralytics.com/images/zidane.jpg' # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
results.print() # or .show(), .save(), .crop(), .pandas(), etc.
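Beyond printing, the results object also exposes the raw detections, e.g. as tensors via results.xyxy; a short sketch using the Hub model's standard attributes:
# detections for the first image as an N x 6 tensor: x1, y1, x2, y2, confidence, class
for *box, conf, cls in results.xyxy[0].tolist():
    print(model.names[int(cls)], round(conf, 2), [round(c) for c in box])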
Inference with detect.py
detect.py runs inference on a variety of sources, downloading models automatically from the latest YOLOv3 release and saving results to runs/detect.
$ python detect.py --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
Training
Tutorials
Environments
Get started in seconds with our verified environments. Click each icon below for details.
Integrations
Weights and Biases | Roboflow ⭐ NEW |
---|---|
Automatically track and visualize all your YOLOv3 training runs in the cloud with Weights & Biases | Label and export your custom datasets directly to YOLOv3 for training with Roboflow |
Why YOLOv3
YOLOv3-P5 640 Figure (click to expand)
Figure Notes (click to expand)
python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt
Model | size (pixels) | mAP val 0.5:0.95 | mAP val 0.5 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | params (M) | FLOPs @640 (B) |
---|---|---|---|---|---|---|---|---|
YOLOv5n | 640 | 28.4 | 46.0 | 45 | 6.3 | 0.6 | 1.9 | 4.5 |
YOLOv5s | 640 | 37.2 | 56.0 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
YOLOv5m | 640 | 45.2 | 63.9 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
YOLOv5l | 640 | 48.8 | 67.2 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
YOLOv5x | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
YOLOv5n6 | 1280 | 34.0 | 50.7 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
YOLOv5s6 | 1280 | 44.5 | 63.0 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
YOLOv5m6 | 1280 | 51.0 | 69.0 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
YOLOv5l6 | 1280 | 53.6 | 71.6 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
YOLOv5x6 + TTA | 1280 / 1536 | 54.7 / 55.4 | 72.4 / 72.3 | 3136 / - | 26.2 / - | 19.4 / - | 140.7 / - | 209.8 / - |
Table Notes (click to expand)
python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45
python val.py --data coco.yaml --img 1536 --iou 0.7 --augment
Author: ultralytics
Source Code: https://github.com/ultralytics/yolov3
License: GPL-3.0 license
#machinelearning #deeplearning #ios
Learn how to build an object tracker using YOLOv3, Deep SORT, and TensorFlow! Run the real-time object tracker on both webcam and video. This guide will show you how to get the necessary code, set up the required dependencies, and run the tracker.
This repository implements YOLOv3 and Deep SORT to perform real-time object tracking. YOLOv3 is an algorithm that uses deep convolutional neural networks to perform object detection. We can feed these object detections into Deep SORT (Simple Online and Realtime Tracking with a Deep Association Metric) to create a real-time object tracker.
# Tensorflow CPU
conda env create -f conda-cpu.yml
conda activate tracker-cpu
# Tensorflow GPU
conda env create -f conda-gpu.yml
conda activate tracker-gpu
# TensorFlow CPU
pip install -r requirements.txt
# TensorFlow GPU
pip install -r requirements-gpu.txt
# Ubuntu 18.04
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt install nvidia-driver-430
# Windows/Other
https://www.nvidia.com/Download/index.aspx
For Linux: Let's download the official yolov3 weights pretrained on the COCO dataset.
# yolov3
wget https://pjreddie.com/media/files/yolov3.weights -O weights/yolov3.weights
# yolov3-tiny
wget https://pjreddie.com/media/files/yolov3-tiny.weights -O weights/yolov3-tiny.weights
For Windows: You can download the yolov3 weights by clicking here and yolov3-tiny here, then save them to the weights folder.
Learn How To Train Custom YOLOV3 Weights Here: https://www.youtube.com/watch?v=zJDUhGL26iU
Add your custom weights file to the weights folder and your custom .names file to the data/labels folder.
Load the weights using the load_weights.py script. This will convert the yolov3 weights into TensorFlow .tf model files!
# yolov3
python load_weights.py
# yolov3-tiny
python load_weights.py --weights ./weights/yolov3-tiny.weights --output ./weights/yolov3-tiny.tf --tiny
# yolov3-custom (add --tiny flag if your custom weights were trained for tiny model)
python load_weights.py --weights ./weights/<YOUR CUSTOM WEIGHTS FILE> --output ./weights/yolov3-custom.tf --num_classes <# CLASSES>
After executing one of the above lines, you should see the proper .tf files in your weights folder. You are now ready to run the object tracker.
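Note that the .tf output is a TensorFlow checkpoint prefix rather than a single file; a quick sanity check (a generic sketch, assuming standard TensorFlow checkpoint naming with .index and .data-* suffixes) is:
import glob
# expect files like yolov3.tf.index and yolov3.tf.data-00000-of-00001
print(glob.glob('./weights/yolov3.tf*'))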
Now you can run the object tracker with whichever model you have set up: pretrained, tiny, or custom.
# yolov3 on video
python object_tracker.py --video ./data/video/test.mp4 --output ./data/video/results.avi
# yolov3 on webcam
python object_tracker.py --video 0 --output ./data/video/results.avi
# yolov3-tiny
python object_tracker.py --video ./data/video/test.mp4 --output ./data/video/results.avi --weights ./weights/yolov3-tiny.tf --tiny
# yolov3-custom (add --tiny flag if your custom weights were trained for the tiny model)
python object_tracker.py --video ./data/video/test.mp4 --output ./data/video/results.avi --weights ./weights/yolov3-custom.tf --num_classes <# CLASSES> --classes ./data/labels/<YOUR CUSTOM .names FILE>
The --output flag saves your object tracker results as an AVI file that you can watch back. The flag is not necessary if you don't want to save the resulting video.
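To review a saved results.avi, a few lines of OpenCV are enough (a generic playback sketch, not part of this repo):
import cv2

cap = cv2.VideoCapture('./data/video/results.avi')
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('tracker results', frame)
    if cv2.waitKey(30) & 0xFF == ord('q'):  # press q to stop early
        break
cap.release()
cv2.destroyAllWindows()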
There is a test video uploaded in the data/video folder called test.mp4. If you followed all the steps properly with the pretrained COCO yolov3.weights model, then when you run the object tracker with the first command above you should see the following.
This is a demo of running the object tracker on your webcam using the webcam command above.
load_weights.py:
--output: path to output
(default: './weights/yolov3.tf')
--[no]tiny: yolov3 or yolov3-tiny
(default: 'false')
--weights: path to weights file
(default: './weights/yolov3.weights')
--num_classes: number of classes in the model
(default: '80')
(an integer)
object_tracker.py:
--classes: path to classes file
(default: './data/labels/coco.names')
--video: path to input video (use 0 for webcam)
(default: './data/video/test.mp4')
--output: path to output video (remember to set the right codec for the given format, e.g. XVID for .avi)
(default: None)
--output_format: codec used in VideoWriter when saving video to file
(default: 'XVID')
--[no]tiny: yolov3 or yolov3-tiny
(default: 'false')
--weights: path to weights file
(default: './weights/yolov3.tf')
--num_classes: number of classes in the model
(default: '80')
(an integer)
--yolo_max_boxes: maximum number of detections at one time
(default: '100')
(an integer)
--yolo_iou_threshold: IoU threshold for how close two boxes can be before they are merged into a single detection
(default: 0.5)
(a float)
--yolo_score_threshold: minimum confidence score a detection must have in order to count
(default: 0.5)
(a float)
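As an illustration, the detection thresholds can be combined in a single run; the flag names are exactly those documented above, and the values are only examples:
# stricter confidence cutoff, tighter NMS, fewer simultaneous boxes
python object_tracker.py --video ./data/video/test.mp4 --yolo_score_threshold 0.7 --yolo_iou_threshold 0.4 --yolo_max_boxes 50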
Author: theAIGuysCode
Official Website: https://github.com/theAIGuysCode/yolov3_deepsort
License: GPL-3.0 license
Subscribe: https://www.youtube.com/c/TheAIGuy/featured
#tensorflow #yolov3 #python
I recently had to convert a deep learning model (a MobileNetV2 variant) from PyTorch to TensorFlow Lite. It was a long, complicated journey that involved jumping through a lot of hoops to make it work. I found myself collecting pieces of information from Stack Overflow posts and GitHub issues. My goal is to share my experience in an attempt to help someone else who is lost like I was.
DISCLAIMER: This is not a guide on how to properly do this conversion. I only wish to share my experience. I might have done it wrong (especially because I have no experience with TensorFlow). If you notice something that I could have done better or differently, please comment and I'll update the post accordingly.
Convert a deep learning model (a MobileNetV2 variant) from PyTorch to TensorFlow Lite. The conversion process should be:
PyTorch → ONNX → TensorFlow → TFLite
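The post doesn't reproduce the export code itself; as a rough sketch of the first hop, assuming a stock torchvision MobileNetV2 as a stand-in and a 224x224 input (the actual model and input size may differ):
import torch
import torchvision

# stand-in for the MobileNetV2 variant discussed in the post
model = torchvision.models.mobilenet_v2(pretrained=True).eval()

# dummy input with an explicit batch dimension of 1 (see the note on this below)
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(model, dummy, 'mobilenetv2.onnx',
                  input_names=['input'], output_names=['output'],
                  opset_version=11)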
In order to test the converted models, a set of roughly 1,000 input tensors was generated, and the PyTorch model's output was calculated for each. That set was later used to test each of the converted models by comparing their outputs against the original outputs via a mean error metric over the entire set. The mean error reflects how different the converted model outputs are from the original PyTorch model outputs over the same input.
I decided to treat a model with a mean error smaller than 1e-6 as a successfully converted model.
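The exact metric isn't shown in the post; a minimal sketch of such a check, assuming mean absolute error averaged over the whole set:
import numpy as np

def mean_error(reference_outputs, converted_outputs):
    # average absolute difference between paired outputs across the test set
    return float(np.mean([np.mean(np.abs(a - b))
                          for a, b in zip(reference_outputs, converted_outputs)]))

# a conversion counts as successful if mean_error(...) < 1e-6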
It might also be important to note that I added the batch dimension to the tensor, even though it was 1. I had no reason to do so other than a hunch from my previous experience converting PyTorch to DLC models.
#mlops #tensorflow #onnx #pytorch #tflite
In this blog, we are going to see what the ONNX standard is, what its components are, and how to carry out interoperability between different deep learning frameworks. This blog will address the following sections:
So let's get started!
#onnx #tensorflow #interoperability #pytorch #onnx-runtime
YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics' open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
Documentation
See the YOLOv5 Docs for full documentation on training, testing and deployment.
Quick Start Examples
Install
Clone repo and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7.
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
Inference
YOLOv5 PyTorch Hub inference. Models download automatically from the latest YOLOv5 release.
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5n - yolov5x6, custom
# Images
img = 'https://ultralytics.com/images/zidane.jpg' # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
results.print() # or .show(), .save(), .crop(), .pandas(), etc.
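The pandas accessor is convenient for filtering detections; a short sketch using the standard YOLOv5 Hub API:
# detections for the first image as a DataFrame: xmin, ymin, xmax, ymax, confidence, class, name
df = results.pandas().xyxy[0]
print(df[df['confidence'] > 0.5])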
Inference with detect.py
detect.py runs inference on a variety of sources, downloading models automatically from the latest YOLOv5 release and saving results to runs/detect.
python detect.py --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
Training
The commands below reproduce YOLOv5 COCO results. Models and datasets download automatically from the latest YOLOv5 release. Training times for YOLOv5n/s/m/l/x are 1/2/4/6/8 days on a V100 GPU (Multi-GPU times faster). Use the largest --batch-size possible, or pass --batch-size -1 for YOLOv5 AutoBatch. Batch sizes shown are for V100-16GB.
python train.py --data coco.yaml --cfg yolov5n.yaml --weights '' --batch-size 128
python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
python train.py --data coco.yaml --cfg yolov5m.yaml --weights '' --batch-size 40
python train.py --data coco.yaml --cfg yolov5l.yaml --weights '' --batch-size 24
python train.py --data coco.yaml --cfg yolov5x.yaml --weights '' --batch-size 16
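For custom data, the same script can start from a pretrained checkpoint; for example, a short fine-tune on the small coco128 dataset that YOLOv5 downloads automatically (values here are illustrative):
python train.py --data coco128.yaml --weights yolov5s.pt --img 640 --batch-size 16 --epochs 3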
Tutorials
Integrations
Weights and Biases | Roboflow ⭐ NEW |
---|---|
Automatically track and visualize all your YOLOv5 training runs in the cloud with Weights & Biases | Label and export your custom datasets directly to YOLOv5 for training with Roboflow |
Why YOLOv5
YOLOv5-P5 640 Figure (click to expand)
Figure Notes (click to expand)
python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt
Model | size (pixels) | mAP val 0.5:0.95 | mAP val 0.5 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | params (M) | FLOPs @640 (B) |
---|---|---|---|---|---|---|---|---|
YOLOv5n | 640 | 28.0 | 45.7 | 45 | 6.3 | 0.6 | 1.9 | 4.5 |
YOLOv5s | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
YOLOv5m | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
YOLOv5l | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
YOLOv5x | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
YOLOv5n6 | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
YOLOv5s6 | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 |
YOLOv5m6 | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
YOLOv5l6 | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
YOLOv5x6 + TTA | 1280 / 1536 | 55.0 / 55.8 | 72.7 / 72.7 | 3136 / - | 26.2 / - | 19.4 / - | 140.7 / - | 209.8 / - |
Table Notes (click to expand)
python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
python val.py --data coco.yaml --img 640 --task speed --batch 1
python val.py --data coco.yaml --img 1536 --iou 0.7 --augment
Contact
For YOLOv5 bugs and feature requests please visit GitHub Issues. For business inquiries or professional support requests please visit https://ultralytics.com/contact.
Download Details:
Author: ultralytics
Source Code: https://github.com/ultralytics/yolov5
License: GPL-3.0 License
#yolo #pytorch