An implementation of the DynamicFusion paper by Newcombe et al. (CVPR 2015).
The code is based on this KinectFusion implementation.
Clone dynamicfusion and dependencies.
git clone https://github.com/mihaibujanca/dynamicfusion --recursive
Install NVIDIA drivers.
Alternatively, a good tutorial covering some common issues can be found here.
For fresh installs (this assumes you cloned your project in your home directory!):
chmod +x build.sh
./build.sh
If you are not on a fresh install, check build.sh for build instructions and dependencies.
To build the tests as well, set -DBUILD_TESTS=ON.
To save frames showing the reconstruction progress, pass -DSAVE_RECONSTRUCTION_FRAMES=ON. The frames will be saved in <project_root>/output.
To build documentation, go to the project root directory and execute
doxygen -g
doxygen Doxyfile
To download the example dataset and run the demo on it:
./download_data
./build/bin/dynamicfusion data/umbrella
Dependencies:
Implicit dependency (needed by opencv_viz):
Install NVIDIA drivers and CUDA
Optionals:
Download the dataset. Create a data folder inside the project root directory. Unzip the archive into data and remove any files that are not .png. Inside data, create directories color and depth, and move the color and depth frames to their corresponding folders.
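The layout steps above can be sketched as a small Python script. The folder names (data, color, depth) follow the instructions; the `organise_dataset` helper and the assumption that depth frames carry "depth" in their filename are illustrative, so adjust the predicate to match how your dataset actually names its frames.

```python
import os
import shutil

def organise_dataset(root="data"):
    """Sort extracted .png frames into color/ and depth/ subfolders.

    Assumes depth frames have 'depth' in their filename; adjust this
    to match your dataset's naming scheme.
    """
    os.makedirs(os.path.join(root, "color"), exist_ok=True)
    os.makedirs(os.path.join(root, "depth"), exist_ok=True)
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if not os.path.isfile(path):
            continue
        if not name.endswith(".png"):
            os.remove(path)  # drop anything that is not a .png
        elif "depth" in name.lower():
            shutil.move(path, os.path.join(root, "depth", name))
        else:
            shutil.move(path, os.path.join(root, "color", name))
```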
To use .oni captures, or to stream straight from a Kinect device, use ./build/bin/dynamicfusion_kinect <path-to-oni> or ./build/bin/dynamicfusion_kinect <device_id>.
Note: currently the frame rate is too low (about 10 s per frame) to cope with live input, so it is advisable to capture your input first.
@InProceedings{Newcombe_2015_CVPR,
author = {Newcombe, Richard A. and Fox, Dieter and Seitz, Steven M.},
title = {DynamicFusion: Reconstruction and Tracking of Non-Rigid Scenes in Real-Time},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}
The example dataset is taken from the VolumeDeform project.
@inbook{innmann2016volume,
author = "Innmann, Matthias and Zollh{\"o}fer, Michael and Nie{\ss}ner, Matthias and Theobalt, Christian
and Stamminger, Marc",
editor = "Leibe, Bastian and Matas, Jiri and Sebe, Nicu and Welling, Max",
title = "VolumeDeform: Real-Time Volumetric Non-rigid Reconstruction",
booktitle = "Computer Vision -- ECCV 2016: 14th European Conference, Amsterdam, The Netherlands,
October 11-14, 2016, Proceedings, Part VIII",
year = "2016",
publisher = "Springer International Publishing",
address = "Cham",
pages = "362--379",
isbn = "978-3-319-46484-8",
doi = "10.1007/978-3-319-46484-8_22",
url = "http://dx.doi.org/10.1007/978-3-319-46484-8_22"
}
Download Details:
Author: mihaibujanca
Source Code: https://github.com/mihaibujanca/dynamicfusion
License: BSD-3-Clause License
HDR images combine the information of multiple pictures taken at different exposures. In a scene where the lighting is uneven, a single shot may overexpose certain areas of the image, and details will be lost to the elevated brightness. Conversely, the same shot may also contain underexposed areas, which likewise leads to information loss.
To create an HDR image you will need:
#hdr #opencv #computer-vision #python #opencv-python
Learn how to create a virtual pen and eraser with Python and OpenCV, with source code and a complete guide. This entire application is built fundamentally on contour detection. A contour can be thought of as a closed curve along a boundary of the same color or intensity, something like the outline of a blob. In this project we use color masking to get a binary mask of our target pen color, then use contour detection to find the location of the pen in each frame.
#python #create virtual pen and eraser with opencv #programming #opencv
By default there is no need to build OpenCV with CUDA for GPU processing, but in production, when you have heavy OpenCV manipulations to run on image or video files, the OpenCV CUDA library lets those operations run on the GPU rather than the CPU, which saves a lot of time.
Building OpenCV with CUDA enabled was not as easy as it sounds: it took me a painful week to establish the connection properly, and the process consumes both time and money. So I want to record the overall process, for my future self as well as for others.
For the demonstration, I am renting a p3.8xlarge EC2 instance on AWS, which has 4 NVIDIA GPUs.
Source — AWS EC2 Pricing
So if you need any help in starting an EC2 instance for the first time, you can refer to my previous post on Step by Step Creation of an EC2 Instance in AWS and Access it via Putty & WinSCP and during the process select the GPU instance you require.
Now after ssh-ing into the instance, before we get into the process we need to install a lot of packages to make the environment ready.
Note: I have consolidated all the commands I ran from start to end and added them at the bottom. If you are curious, find them in this link and follow along.
Run the commands below one after another on your instance; I have also attached screenshots so you can compare your outputs against mine.
All the screenshots used hereafter were produced by the author.
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential cmake unzip pkg-config
sudo apt-get install gcc-6 g++-6
sudo apt-get install screen
sudo apt-get install libxmu-dev libxi-dev libglu1-mesa libglu1-mesa-dev
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libopenblas-dev libatlas-base-dev liblapack-dev gfortran
sudo apt-get install libhdf5-serial-dev
sudo apt-get install python3-dev python3-tk python-imaging-tk
sudo apt-get install libgtk-3-dev
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-driver-418
sudo reboot
#opencv-in-ubuntu #opencv-python #cuda #nvidia #opencv #ubuntu
Hello fellow learner! In this tutorial, we will learn how to write string text on Images in Python using the OpenCV putText() method. So let’s get started.
OpenCV Python is a library of programming functions mainly aimed at real-time computer vision and image processing problems.
OpenCV provides a putText() method which is used to put text on any image. The method takes several parameters, including the color, which is given in BGR format, i.e., first the blue color value, then the green color value, then the red color value, each in the range 0 to 255.
#python modules #opencv #opencv puttext() #writing text on images