Grace Lesch

1639785600

RetrievalFuse: Neural 3D Scene Reconstruction with a Database

RetrievalFuse

Paper | Project Page | Video

RetrievalFuse: Neural 3D Scene Reconstruction with a Database 
Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai 
ICCV2021

This repository contains the code for the ICCV 2021 paper RetrievalFuse, a novel approach for 3D reconstruction from low-resolution distance field grids and from point clouds.

In contrast to traditional learned generative models, which encode the full generative process into a neural network and can struggle to maintain local detail at the scene level, our method directly leverages scene geometry from the training database.
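The core retrieval idea, embedding coarse input chunks and looking up the nearest high-resolution chunks in a database, can be sketched as follows. This is a toy nearest-neighbor illustration only, not the paper's learned retrieval network or attention-based fusion; all names are illustrative:

```python
import numpy as np

def retrieve_chunks(query_emb, db_emb, db_chunks, k=1):
    """Return the k nearest database chunks for each query embedding."""
    # Pairwise squared L2 distances: shape (n_query, n_db)
    d = ((query_emb[:, None, :] - db_emb[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]  # indices of the k closest database entries
    return db_chunks[idx]               # shape (n_query, k, *chunk_shape)
```

The refinement network then fuses such retrieved chunks with the coarse input to produce the final high-resolution reconstruction.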

Files and Folders


The broad code structure is as follows:

File / Folder                       Description
config/super_resolution             Super-resolution experiment configs
config/surface_reconstruction       Surface reconstruction experiment configs
config/base                         Defaults for configurations
config/config_handler.py            Config file parser
data/splits                         Training and validation splits for different datasets
dataset/scene.py                    SceneHandler class for managing access to scene data samples
dataset/patched_scene_dataset.py    PyTorch dataset class for scene data
external/ChamferDistancePytorch     For calculating rough chamfer distance between prediction and target while training
model/attention.py                  Attention, folding and unfolding modules
model/loss.py                       Loss functions
model/refinement.py                 Refinement network
model/retrieval.py                  Retrieval network
model/unet.py                       U-Net model used as a backbone in the refinement network
runs/                               Checkpoints and visualizations for experiments are dumped here
trainer/train_retrieval.py          Lightning module for training the retrieval network
trainer/train_refinement.py         Lightning module for training the refinement network
util/arguments.py                   Argument parsing (additional arguments apart from those in config)
util/filesystem_logger.py           For copying source code for each run into the experiment log directory
util/metrics.py                     Rough metrics for logging during training
util/mesh_metrics.py                Final metrics on meshes
util/retrieval.py                   Script to dump retrievals once retrieval networks have been trained; needed for training refinement
util/visualizations.py              Utility scripts for visualizations
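Since external/ChamferDistancePytorch is only used for a rough training-time metric, the quantity it computes can be illustrated with a simple O(N·M) NumPy version (a sketch for intuition, not the repo's CUDA implementation):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point sets a (N, 3) and b (M, 3)."""
    # Pairwise squared distances between all points of a and b
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    # Average nearest-neighbor distance in each direction
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```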

Further, the data/ directory has the following layout:

data                    # root data directory
├── sdf_008             # low-res (8^3) distance fields
    ├── <dataset_0>     
        ├── <sample_0>
        ├── <sample_1>
        ├── <sample_2>
        ...
    ├── <dataset_1>
    ...
├── sdf_016             # low-res (16^3) distance fields
    ├── <dataset_0>
        ├── <sample_0>
        ├── <sample_1>
        ├── <sample_2>
        ...
    ├── <dataset_1>
    ...
├── sdf_064             # high-res (64^3) distance fields
    ├── <dataset_0>
        ├── <sample_0>
        ├── <sample_1>
        ├── <sample_2>
        ...
    ├── <dataset_1>
    ...
├── pc_20K              # point cloud inputs
    ├── <dataset_0>
        ├── <sample_0>
        ├── <sample_1>
        ├── <sample_2>
        ...
    ├── <dataset_1>
    ...
├── splits              # train/val splits
├── size                # data needed by SceneHandler class (autocreated on first run)
├── occupancy           # data needed by SceneHandler class (autocreated on first run)
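Under this layout, a sample path is simply root/type/dataset/sample. A hypothetical helper (illustrative only, not the repo's API) makes the composition explicit:

```python
from pathlib import Path

def sample_path(root, data_type, dataset, sample):
    """Compose a path like data/sdf_008/<dataset>/<sample> under the layout above."""
    return Path(root) / data_type / dataset / sample

# e.g. sample_path("data", "sdf_064", "<dataset_0>", "<sample_0>")
```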

Dependencies


Install the dependencies using pip:

pip install -r requirements.txt

Be sure to pull the ChamferDistancePytorch submodule in external.

Data Preparation


For ShapeNetV2 and Matterport, get the appropriate meshes from the datasets. For 3DFRONT, get the 3DFUTURE meshes and 3DFRONT scripts. To obtain 3DFRONT room meshes, use our fork of 3D-FRONT-ToolBox.

Once you have the meshes, use our fork of sdf-gen to create the low-res distance field inputs and high-res targets. For creating point cloud inputs, simply use trimesh.sample.sample_surface (see util/misc/sample_scene_point_clouds). Place the processed data in the appropriate directories:

  • data/sdf_008/<dataset> or data/sdf_016/<dataset> for low-res inputs
  • data/pc_20K/<dataset> for point cloud inputs
  • data/sdf_064/<dataset> for targets
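For intuition, trimesh.sample.sample_surface amounts to area-weighted face selection plus uniform barycentric sampling. A pure-NumPy sketch of that procedure (function and argument names are illustrative; in practice just call the trimesh function as described above):

```python
import numpy as np

def sample_surface(vertices, faces, n_points, seed=0):
    """Uniformly sample points on a triangle mesh, weighted by face area."""
    rng = np.random.default_rng(seed)
    tri = vertices[faces]  # (F, 3, 3) triangle corner coordinates
    # Face areas from the cross product of two edge vectors
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick faces with probability proportional to their area
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    a, b, c = tri[face_idx, 0], tri[face_idx, 1], tri[face_idx, 2]
    # Uniform barycentric coordinates; reflect to stay inside the triangle
    r1, r2 = rng.random(n_points), rng.random(n_points)
    mask = r1 + r2 > 1.0
    r1[mask], r2[mask] = 1.0 - r1[mask], 1.0 - r2[mask]
    return a + r1[:, None] * (b - a) + r2[:, None] * (c - a)
```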

Training the Retrieval Network


Make sure that the CUDA_HOME environment variable is set. To train retrieval networks, use the following command:

python trainer/train_retrieval.py --config config/<config> --val_check_interval 5 --experiment retrieval --wandb_main --sanity_steps 1

We provide some sample configurations for retrieval.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/retrieval_008_064.yaml
  • config/super_resolution/3DFront/retrieval_008_064.yaml
  • config/super_resolution/Matterport3D/retrieval_016_064.yaml

For surface-reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/retrieval_128_064.yaml
  • config/surface_reconstruction/3DFront/retrieval_128_064.yaml
  • config/surface_reconstruction/Matterport3D/retrieval_128_064.yaml

Once trained, create the retrievals for the train/validation sets using the following commands:

python util/retrieval.py --mode map --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>
python util/retrieval.py --mode compose --retrieval_ckpt <trained_retrieval_ckpt> --config <retrieval_config>

Training the Refinement Network


Use the following command to train the refinement network:

python trainer/train_refinement.py --config <config> --val_check_interval 5 --experiment refinement --sanity_steps 1 --wandb_main --retrieval_ckpt <retrieval_ckpt>

Again, sample configurations for refinement are provided in the config directory.

For super-resolution, e.g.

  • config/super_resolution/ShapeNetV2/refinement_008_064.yaml
  • config/super_resolution/3DFront/refinement_008_064.yaml
  • config/super_resolution/Matterport3D/refinement_016_064.yaml

For surface-reconstruction, e.g.

  • config/surface_reconstruction/ShapeNetV2/refinement_128_064.yaml
  • config/surface_reconstruction/3DFront/refinement_128_064.yaml
  • config/surface_reconstruction/Matterport3D/refinement_128_064.yaml

Visualizations and Logs


Visualizations and checkpoints are dumped in the runs/<experiment> directory. Logs are uploaded to the user's Weights & Biases dashboard.

Processed Data & Models (ShapeNet)


Download the processed data for the ShapeNetV2 dataset using the following command:

bash data/download_shapenet_processed.sh

This will populate the data/sdf_008, data/sdf_064, data/pc_20K, data/occupancy and data/size folders with processed ShapeNet data.
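A quick way to sanity-check that the download populated the expected folders (a hypothetical helper, not part of the repo):

```python
from pathlib import Path

def missing_folders(root, expected=("sdf_008", "sdf_064", "pc_20K", "occupancy", "size")):
    """Return the expected data subfolders that are absent under root."""
    return [name for name in expected if not (Path(root) / name).is_dir()]

# e.g. missing_folders("data") should return [] after a successful download
```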

To download the trained models for ShapeNetV2, use the following script:

bash data/download_shapenet_models.sh

This downloads the retrieval and refinement checkpoints for ShapeNet on both the super-resolution and surface reconstruction tasks, plus the precomputed retrievals. You can resume training from these with the --resume flag in the appropriate scripts (or run inference with the --sanity_steps flag). For example, to resume training (and/or dump inferences from data/splits/ShapeNetV2/main/val_vis.txt), use the following commands:

# super-resolution
python trainer/train_refinement.py --config config/super_resolution/ShapeNetV2/refinement_008_064.yaml --sanity_steps -1 --resume runs/checkpoints/superres_refinement_ShapeNetV2.ckpt --retrieval_ckpt runs/07101959_superresolution_ShapeNetV2_upload/_ckpt_epoch=79.ckpt --current_phase 3 --max_epoch 161 --new_exp_for_resume
# surface-reconstruction
python trainer/train_refinement.py --config config/surface_reconstruction/ShapeNetV2/refinement_128_064.yaml --sanity_steps -1 --resume runs/checkpoints/surfacerecon_refinement_ShapeNetV2.ckpt --retrieval_ckpt runs/07101959_surface_reconstruction_ShapeNetV2_upload/_ckpt_epoch=59.ckpt --current_phase 3 --max_epoch 161 --new_exp_for_resume

Citation


If you find our work useful in your research, please consider citing:

@inproceedings{siddiqui2021retrievalfuse,
  title = {RetrievalFuse: Neural 3D Scene Reconstruction with a Database},
  author = {Siddiqui, Yawar and Thies, Justus and Ma, Fangchang and Shan, Qi and Nie{\ss}ner, Matthias and Dai, Angela},
  booktitle = {Proc. International Conference on Computer Vision (ICCV)},
  month = oct,
  year = {2021}
}

License


The code from this repository is released under the MIT license.

Author: nihalsid
Source Code: https://github.com/nihalsid/retrieval-fuse

