Michio JP

Visual Odometry Techniques for Embodied PointGoal Navigation

PointNav-VO

The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation

Project Page | Paper

Table of Contents

  • Setup
  • Reproduce
  • Use VO as a Drop-in Module
  • Train Your Own VO
  • Citation

Setup

Install Dependencies

conda env create -f environment.yml

Install Habitat

This repo has been tested with the following commits of habitat-lab and habitat-sim.

habitat-lab == d0db1b55be57abbacc5563dca2ca14654c545552
habitat-sim == 020041d75eaf3c70378a9ed0774b5c67b9d3ce99
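
For example, assuming you install both packages from source, you can pin these commits with something like:

git clone https://github.com/facebookresearch/habitat-lab.git && cd habitat-lab
git checkout d0db1b55be57abbacc5563dca2ca14654c545552

git clone https://github.com/facebookresearch/habitat-sim.git && cd habitat-sim
git checkout 020041d75eaf3c70378a9ed0774b5c67b9d3ce99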

Note that, to align with the Habitat Challenge 2020 settings (see Step 36 in the Dockerfile), we compiled habitat-sim without CUDA support when installing it:

python setup.py install --headless

There was a discrepancy between the noise models in the CPU and GPU versions, which has since been fixed (see this issue). Therefore, to reproduce the results in the paper with our pre-trained weights, you need to use the CPU version's noise model.

Download Data

Running this repo requires two datasets:

  1. the Gibson scene dataset
  2. the PointGoal Navigation splits; specifically, pointnav_gibson_v2.zip

Please follow Habitat's instructions to download them. We assume all data is placed under ./dataset with the following structure:

.
+-- dataset
|  +-- Gibson
|  |  +-- gibson
|  |  |  +-- Adrian.glb
|  |  |  +-- Adrian.navmesh
|  |  |  ...
|  +-- habitat_datasets
|  |  +-- pointnav
|  |  |  +-- gibson
|  |  |  |  +-- v2
|  |  |  |  |  +-- train
|  |  |  |  |  +-- val
|  |  |  |  |  +-- valmini

Reproduce

Download the pretrained checkpoints of the RL navigation policy and the VO model from this link. Put them under pretrained_ckpts with the following structure:

.
+-- pretrained_ckpts
|  +-- rl
|  |  +-- no_tune
|  |  |  +-- rl_no_tune.pth
|  |  +-- tune_vo
|  |  |  +-- rl_tune_vo.pth
|  +-- vo
|  |  +-- act_forward.pth
|  |  +-- act_left_right_inv_joint.pth

Run the following commands to reproduce the navigation results. On an Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz and an NVIDIA GeForce GTX 1080 Ti, it takes around 4.5 hours to complete evaluation on all 994 episodes with the navigation policy tuned with VO.

cd /path/to/this/repo
export POINTNAV_VO_ROOT=$PWD

export NUMBA_NUM_THREADS=1 && \
export NUMBA_THREADING_LAYER=workqueue && \
conda activate pointnav-vo && \
python ${POINTNAV_VO_ROOT}/launch.py \
--repo-path ${POINTNAV_VO_ROOT} \
--n_gpus 1 \
--task-type rl \
--noise 1 \
--run-type eval \
--addr 127.0.1.1 \
--port 8338

Use VO as a Drop-in Module

We provide a class BaseRLTrainerWithVO in base_trainer_with_vo.py that contains all the functions needed to compute odometry. Specifically, you can use _compute_local_delta_states_from_vo to compute odometry from adjacent observations. The code structure will be something like:

# estimate the agent's local state change between consecutive observations
local_delta_states = _compute_local_delta_states_from_vo(prev_obs, cur_obs, action)
# update the goal position to the agent's new local frame
cur_goal = compute_goal_pos(prev_goal, local_delta_states)

To get a better sense of how to use this function, please refer to challenge2020_agent.py, which is the agent we used in Habitat Challenge 2020.
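
For intuition, a minimal sketch of an agent step built around these two calls might look as follows; the agent class, policy call, and observation bookkeeping below are hypothetical, and only _compute_local_delta_states_from_vo and compute_goal_pos come from this repo:

def act(self, cur_obs):
    # On the very first step there is no previous observation, so no VO update yet.
    if self.prev_obs is not None:
        # Estimate the local state change between the previous and current observations.
        local_delta_states = self._compute_local_delta_states_from_vo(
            self.prev_obs, cur_obs, self.prev_action
        )
        # Re-express the goal position in the agent's new local frame.
        self.cur_goal = compute_goal_pos(self.cur_goal, local_delta_states)
    action = self.policy(cur_obs, self.cur_goal)  # hypothetical policy call
    self.prev_obs, self.prev_action = cur_obs, action
    return action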

Train Your Own VO

See TRAIN.md for details.

Citation

Please cite the following paper if you find our model useful. Thanks!

Xiaoming Zhao, Harsh Agrawal, Dhruv Batra, and Alexander Schwing. The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation. ICCV 2021.

@inproceedings{ZhaoICCV2021,
  title={{The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation}},
  author={Xiaoming Zhao and Harsh Agrawal and Dhruv Batra and Alexander Schwing},
  booktitle={Proc. ICCV},
  year={2021},
}

Download Details:

Author: Xiaoming-Zhao
Official Website: https://github.com/Xiaoming-Zhao/PointNav-VO
License: Apache-2.0

Arvel Parker

Visual Analytics and Advanced Data Visualization

Visual analytics is scientific visualization that presents data in such a way that it can be easily understood by anyone.

It lets the human mind interact directly with interactive visuals, which helps make decisions easy and fast.

Visual analytics essentially breaks complex data down into a simple form.

The human brain is built to process visual information quickly, so data visualization makes things easy for students, researchers, mathematicians, scientists, etc.

#blogs #data visualization #business analytics #data visualization techniques #visual analytics #visualizing ml models

Arvel Parker

Visual Analytics Services for Data-Driven Decision Making

Visual analytics is the process of collecting and examining large, complex data sets (structured or unstructured) to extract useful information, draw conclusions about the data, and present the information in the form of interactive visual interfaces and graphics.

Data analytics is usually accomplished by extracting or collecting data from different sources, in the form of numbers, statistics, and the overall activity of an organization, with various deep learning and analytics tools; the data is then processed with data visualization software and presented as graphical charts, figures, and bars.

In today's technology world, data is produced at an incredible rate and volume. Data is collected and stored far faster than it can be analyzed, and visual analytics helps make this vast, complex amount of data useful and readable.

Because the human brain processes visual content better than plain text, advanced visual interfaces let humans interact directly with the data analysis capabilities of today's computers and make well-informed decisions in complex situations.

Visual analytics tools allow you to create beautiful, interactive dashboards or reports that are immediately available on the web or a mobile device. Such a tool may include a Data Explorer that makes it easy for the novice analyst to create forecasts, decision trees, or other statistical models.

#blogs #data visualization #data visualization tools #visual analytics #visualizing ml models

Visual Perception

Why do we visualize data?

It helps us comprehend huge amounts of data by compressing them into simple, easy-to-understand visualizations. It helps us find hidden patterns or see underlying problems in the data itself, which might not have been obvious without a good chart.

Our brain is specialized to perceive the physical world around us as efficiently as possible. Evidence also suggests that we all develop the same visual systems, regardless of our environment or culture, which indicates that the development of the visual system isn't solely based on our environment but is the result of millions of years of evolution; this would contradict the tabula rasa theory (Ware 2021). Sorry, John Locke. Our visual system splits tasks and thus has specialized regions responsible for segmentation (early rapid processing), edge orientation detection, or color and light perception. We are able to extract features and find patterns with ease.

Interestingly, at a higher level of visual perception (visual cognition), our brains are able to highlight colors and shapes to focus on certain aspects. If we search for red-colored highways on a road map, we can use our visual cognition to highlight the red roads and put the other colors in the background (Ware 2021).

#data-visualization #gestalt-principles #visualization #data-science #visual-variables

Trinity Kub

Bottom Tab View inside Navigation Drawer with React Navigation V5

Bottom Tab View + Navigation Drawer

This is an example of a Bottom Tab View inside a Navigation Drawer / Sidebar with React Navigation in React Native. We will use react-navigation to make the navigation drawer and the tab in this example. We hope you have already seen our post on the React Native Navigation Drawer, because this post simply extends it to show the Bottom Tab View inside the Navigation Drawer.

In this example, we have a navigation drawer with 3 screens in the navigation menu and a Bottom Tab on the first screen of the Navigation Drawer. When we open Screen1, the Bottom Tab is visible; on the other screens, it is hidden.

To Create a Drawer Navigator

<NavigationContainer>
  <Drawer.Navigator
    drawerContentOptions={{
      activeTintColor: '#e91e63',
      itemStyle: { marginVertical: 5 },
    }}>
    <Drawer.Screen
      name="HomeScreenStack"
      options={{ drawerLabel: 'Home Screen Option' }}
      component={HomeScreenStack} />
    <Drawer.Screen
      name="SettingScreenStack"
      options={{ drawerLabel: 'Setting Screen Option' }}
      component={SettingScreenStack} />
  </Drawer.Navigator>
</NavigationContainer>

To Create Bottom Tab Navigator

<Tab.Navigator
  initialRouteName="HomeScreen"
  tabBarOptions={{
    activeTintColor: 'tomato',
    inactiveTintColor: 'gray',
    style: {
      backgroundColor: '#e0e0e0',
    },
    labelStyle: {
      textAlign: 'center',
      fontSize: 16
    },
  }}>
  <Tab.Screen
    name="HomeScreen"
    component={HomeScreen}
    options={{
      tabBarLabel: 'Home Screen',
      // tabBarIcon: ({ color, size }) => (
      //   <MaterialCommunityIcons name="home" color={color} size={size} />
      // ),
    }}  />
  <Tab.Screen
    name="ExploreScreen"
    component={ExploreScreen}
    options={{
      tabBarLabel: 'Explore Screen',
      // tabBarIcon: ({ color, size }) => (
      //   <MaterialCommunityIcons name="settings" color={color} size={size} />
      // ),
    }} />
</Tab.Navigator>

In this example, we will make a Tab Navigator inside a Drawer Navigator so let’s get started.

To Make a React Native App

Getting started with React Native will help you learn more about how to set up a React Native project. We are going to use react-native init to make our React Native App. Assuming that you have Node installed, you can use npm to install the react-native-cli command-line utility. Open the terminal, go to your workspace, and run

npm install -g react-native-cli

Run the following commands to create a new React Native project

react-native init ProjectName

If you want to start a new project with a specific React Native version, you can use the --version argument:

react-native init ProjectName --version X.XX.X

react-native init ProjectName --version react-native@next


This will make a project structure with an index file named App.js in your project directory.

#bottom navigation #drawer navigation #react #react navigation