ElasticFusion

Real-time dense visual SLAM system capable of capturing comprehensive, dense, globally consistent surfel-based maps of room-scale environments explored using an RGB-D camera.

Related Publications

Please cite the following papers if you make use of our system in any of your own endeavors:

  • ElasticFusion: Real-Time Dense SLAM and Light Source Estimation, T. Whelan, R. F. Salas-Moreno, B. Glocker, A. J. Davison and S. Leutenegger, IJRR '16
  • ElasticFusion: Dense SLAM Without A Pose Graph, T. Whelan, S. Leutenegger, R. F. Salas-Moreno, B. Glocker and A. J. Davison, RSS '15

1. What do I need to build it?

1.1. Ubuntu

Ubuntu 22.04 on Xorg, NVIDIA drivers 510.73.05, CUDA driver 11.6, CUDA toolkit 11.5 (essentially whatever is in the Ubuntu repos).

    sudo apt install -y cmake-qt-gui git build-essential libusb-1.0-0-dev libudev-dev openjdk-11-jdk freeglut3-dev libglew-dev libsuitesparse-dev zlib1g-dev libjpeg-dev
    git clone https://github.com/mp3guy/ElasticFusion.git
    cd ElasticFusion/
    git submodule update --init
    cd third-party/OpenNI2/
    make -j8
    cd ../Pangolin/
    mkdir build
    cd build
    cmake .. -DEIGEN_INCLUDE_DIR=$HOME/ElasticFusion/third-party/Eigen/ -DBUILD_PANGOLIN_PYTHON=false
    make -j8
    cd ../../..
    mkdir build
    cd build/
    cmake ..
    make -j8

2. How do I use it?

There are two subprojects in the repo:

  • Core: the main engine, which builds into a shared library that you can link into other projects and treat like an API.
  • Tools: the graphical interface used to run the system on either live sensor data or a logged data file.

The executable (ElasticFusion) can take a bunch of parameters when launched from the command line. They are as follows:

  • -cal : Loads a camera calibration file specified as fx fy cx cy.
  • -l : Processes the specified .klg log file.
  • -p : Loads ground truth poses to use instead of estimated pose.
  • -c : Surfel confidence threshold (default 10).
  • -d : Cutoff distance for depth processing (default 3m).
  • -i : Relative ICP/RGB tracking weight (default 10).
  • -ie : Local loop closure residual threshold (default 5e-05).
  • -ic : Local loop closure inlier threshold (default 35000).
  • -cv : Local loop closure covariance threshold (default 1e-05).
  • -pt : Global loop closure photometric threshold (default 115).
  • -ft : Fern encoding threshold (default 0.3095).
  • -t : Time window length (default 200).
  • -s : Frames to skip at start of log.
  • -e : Cut off frame of log.
  • -f : Flip RGB/BGR.
  • -icl : Enable this if using the ICL-NUIM dataset (flips normals to account for negative focal length on that data).
  • -o : Open loop mode.
  • -rl : Enable relocalisation.
  • -fs : Frame skip if processing a log to simulate real-time.
  • -q : Quit when finished a log.
  • -fo : Fast odometry (single level pyramid).
  • -nso : Disables SO(3) pre-alignment in tracking.
  • -r : Rewind and loop log forever.
  • -ftf : Do frame-to-frame RGB tracking.
  • -sc : Showcase mode (minimal GUI).

Essentially, by default ./ElasticFusion will try to run live off an attached ASUS sensor. You can provide a .klg log file instead with the -l parameter. You can capture logs in .klg format using either Logger1 or Logger2.
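
For example, a typical invocation that processes a log with a custom calibration and quits when finished might look like this (the filenames here are placeholders):

    ./ElasticFusion -cal myCamera.cal -l myLog.klg -q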

3. How do I just use the Core API?

Link against the libefusion.so shared library that the Core builds.

To then use the Core API, make sure to include the header file in your source file:

    #include <ElasticFusion.h>

Initialise the static configuration parameters once somewhere at the start of your program:

    Resolution::getInstance(640, 480);
    Intrinsics::getInstance(528, 528, 320, 240);
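
If you're using your own camera, the intrinsics should come from its calibration rather than these defaults. As a hedged sketch, you could parse them from a plain-text file in the same fx fy cx cy format that the -cal option accepts (this helper is illustrative, not part of the ElasticFusion API):

    #include <fstream>
    #include <string>

    // Illustrative helper (not part of the API): reads "fx fy cx cy",
    // the same plain-text format the -cal command-line option expects.
    void initIntrinsicsFromFile(const std::string & path)
    {
        float fx, fy, cx, cy;
        std::ifstream file(path);
        file >> fx >> fy >> cx >> cy;
        Intrinsics::getInstance(fx, fy, cx, cy);
    }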

Create an OpenGL context before creating an ElasticFusion object, as ElasticFusion uses OpenGL internally. You can do this however you wish; using Pangolin is probably easiest given that it's already a dependency:

    pangolin::Params windowParams;
    windowParams.Set("SAMPLE_BUFFERS", 0);
    windowParams.Set("SAMPLES", 0);
    pangolin::CreateWindowAndBind("Main", 1280, 800, windowParams);

Make an ElasticFusion object and start using it:

    ElasticFusion eFusion;
    eFusion.processFrame(rgb, depth, timestamp, currentPose, weightMultiplier);

See the source code of MainController.cpp for more usage.
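
Putting the pieces together, here is a minimal sketch of a complete program. The buffer formats, the null pose argument and the weight value are assumptions extrapolated from the call shown above, not confirmed signatures; consult MainController.cpp for the authoritative call sequence:

    // Minimal sketch; buffer formats and processFrame argument types are
    // assumptions -- see MainController.cpp for the real call sequence.
    #include <ElasticFusion.h>
    #include <pangolin/pangolin.h>
    #include <cstdint>
    #include <vector>

    // Hypothetical acquisition functions you would implement for your sensor
    // or log reader; they are not provided by ElasticFusion.
    bool haveNextFrame();
    void grabNextFrame(unsigned char * rgb, unsigned short * depth);

    int main()
    {
        // One-time static configuration: image size and camera intrinsics.
        Resolution::getInstance(640, 480);
        Intrinsics::getInstance(528, 528, 320, 240);

        // ElasticFusion needs a live OpenGL context before construction.
        pangolin::Params windowParams;
        windowParams.Set("SAMPLE_BUFFERS", 0);
        windowParams.Set("SAMPLES", 0);
        pangolin::CreateWindowAndBind("Main", 1280, 800, windowParams);

        ElasticFusion eFusion;

        // Frame buffers; formats are assumed (8-bit RGB, 16-bit depth).
        std::vector<unsigned char> rgb(640 * 480 * 3);
        std::vector<unsigned short> depth(640 * 480);
        int64_t timestamp = 0;

        while (haveNextFrame())
        {
            grabNextFrame(rgb.data(), depth.data());
            // A null pose lets ElasticFusion estimate motion itself; a weight
            // multiplier of 1.0 is assumed to be the neutral default.
            eFusion.processFrame(rgb.data(), depth.data(), timestamp++, nullptr, 1.0f);
        }

        return 0;
    }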

4. Datasets

We have provided a sample dataset, available for download here, which you can easily run with ElasticFusion. Launch it as follows:

    ./ElasticFusion -l dyson_lab.klg

5. License

ElasticFusion is freely available for non-commercial use only. Full terms and conditions which govern its use are detailed here and in the LICENSE.txt file.

6. FAQ

What are the hardware requirements?

A very fast NVIDIA GPU (3.5 TFLOPS+) and a fast CPU (something like an i7). If you want to use a non-NVIDIA GPU, you can rewrite the tracking code or substitute it with something else, as the rest of the pipeline is written in the OpenGL Shading Language.

How can I get performance statistics?

Download Stopwatch and run StopwatchViewer at the same time as ElasticFusion.

I ran a large dataset and got assert(graph.size() / 16 < MAX_NODES) failed

Currently there's a limit on the number of nodes in the deformation graph, down to lazy coding (using a really wide texture instead of a proper 2D one). This means we're bound by the maximum dimension of a texture, which is 16384 on modern cards/OpenGL. Either fix the code so this isn't a problem any more, or increase the modulo factor in Core/Shaders/sample.geom.

I have a nice new laptop with a good GPU but it's still slow

If your laptop is running on battery power, the GPU will throttle down to save power, so that's unlikely to work well (as an aside, Kintinuous will run at 30 Hz on a modern laptop on battery power these days). You can try disabling SO(3) pre-alignment, enabling fast odometry, using only ICP or only RGB tracking rather than both, running in open loop mode, or disabling the tracking pyramid. All of these will cost you accuracy.
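
As a concrete example, those suggestions map onto the command-line flags from section 2 (the log filename here is a placeholder):

    ./ElasticFusion -l myLog.klg -nso -fo -o

Here -nso disables SO(3) pre-alignment, -fo enables fast single-pyramid-level odometry, and -o runs in open loop mode.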

I saved a map, how can I view it?

Download MeshLab. Select Render->Shaders->Splatting.

The map keeps getting corrupted - tracking is failing - loop closures are incorrect/not working

Firstly, if you're running live and not processing a log file, ensure you're hitting 30 Hz; this is important. Secondly, you cannot move the sensor extremely fast, because this violates the assumption behind projective data association. In addition, you're probably using a PrimeSense sensor, which means you're suffering from motion blur, unsynchronised cameras and rolling shutter. All of these are aggravated by fast motion and hinder tracking performance.

If you're expecting loop closures and not getting them, pay attention to the inlier and residual graphs in the bottom right; these indicate how close you are to a local loop closure. For global loop closures, you're depending on fern keyframe encoding to save you, which, like all appearance-based place recognition methods, has its limitations.

Is there a ROS bridge/node?

No. The system relies on an extremely fast and tight coupling between the mapping and tracking on the GPU, which I don't believe ROS supports natively in terms of message passing.

This doesn't seem to work like it did in the videos/papers

A substantial amount of refactoring was carried out in order to open source this system, including rewriting a lot of functionality to avoid certain licenses and reduce dependencies. Although great care was taken during this process, it is possible that performance regressions were introduced and have not yet been discovered.


Download Details:

Author: mp3guy
Source Code: https://github.com/mp3guy/ElasticFusion
License: Non-commercial use only (see LICENSE.txt)
