
inputlessdb-Myotis: Data Exploration PIPELINE for Legal NLP

Data Exploration Pipeline for Legal NLP!

  • AI-powered SaaS for legal NLP.
  • Quickly upload, analyze, and interact with data.
  • Extracts, parses, and processes information/attributes from text.
  • Graph visualizations with applications in networking, software engineering, and visual interfaces for other technical domains.
  • Multi-tenant / multi-account.
  • Two-factor authentication.
  • Administration dashboard.
  • Document indexing and search.
  • Document parser/analyzer: doc, docx, pdf, xls, txt, etc.
  • System chat communicator.
  • JavaScript code obfuscator.
  • Fully reprogrammable.

Introduction:

Inputless Myotis is a data exploration tool that makes it easier to navigate Italian legal documents and the information extracted from their raw text.

This project was developed by Giovanni Errico and Leonardo Trisolini from 2020 to 2022, and we hope the community will continue it.

Features / screenshots :)

Screenshot captions (images omitted in this copy):

  • Main screen
  • Account registration
  • Dashboard
  • Upload / graph creator / content store
  • Dynamic graph visualizer interface and layouts: circular, Dagre, and sparse
  • System chat communicator
  • Table view: extract, print, organize (reprogrammable), download

Notes:

No AI model is included in this software!

It was originally built to support the Italian legal system, so feel free to customize and adapt the tags/annotations to your needs; a hedged annotation example follows.
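For instance, a Doccano sequence-labeling export is typically one JSON object per line with character-offset labels. The tag names below (AUTORITA, RIFERIMENTO) are purely illustrative, and the exact field name ("labels" vs. "label") depends on the Doccano version:

{"text": "La Corte di Cassazione, con sentenza n. 1234/2021, ha rigettato il ricorso.", "labels": [[3, 22, "AUTORITA"], [28, 49, "RIFERIMENTO"]]}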

In the end you will have something to enjoy, as I already did!

Requirements:

  • Docker
  • Docker Compose
  • Doccano annotator
  • spaCy NLP (see the usage sketch after this list)
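
As a rough illustration of the spaCy side of the pipeline, the sketch below loads the small Italian model and extracts entities from a legal sentence. It assumes the it_core_news_sm model is installed (python -m spacy download it_core_news_sm) and is not the project's actual extraction code:

import spacy

# Load the small Italian pipeline
# (install it first: python -m spacy download it_core_news_sm)
nlp = spacy.load("it_core_news_sm")

doc = nlp("La Corte di Cassazione ha rigettato il ricorso con sentenza n. 1234/2021.")

# Print each entity with its label and character offsets,
# ready to be reviewed or corrected in Doccano
for ent in doc.ents:
    print(ent.text, ent.label_, ent.start_char, ent.end_char)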

The pipeline includes (a minimal Compose sketch follows the list):

  • Grafana (analytics web application)
  • Nginx (reverse proxy)
  • Celery (task worker)
  • Redis (in-memory database)
  • Postgres (RDBMS)
  • Postgres Exporter (metrics)
  • PG Backups (Postgres backups)
  • Prometheus (metrics)
  • ClamAV (antivirus)
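
A minimal sketch of how part of this stack is typically wired together in Compose; the service names, image tags, and the "app" Celery module are assumptions for illustration, not the project's actual docker-compose.yml:

version: "3.8"
services:
  redis:
    image: redis:6-alpine            # in-memory broker for Celery
  db:
    image: postgres:13-alpine        # RDBMS
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder credential
    volumes:
      - pgdata:/var/lib/postgresql/data
  worker:
    build: .                         # Celery worker built from the app image
    command: celery -A app worker --loglevel=info
    depends_on:
      - redis
      - db
volumes:
  pgdata: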

Deploy:

Build:

docker-compose --compatibility up --build

Staging:

docker-compose -f docker-compose.staging.yml --compatibility up --build

Production:

docker-compose -f docker-compose.prod.yml --compatibility up --build

Stop (note that -v also removes named volumes, including database data):

docker-compose --compatibility down -v

Clear unused images and containers (this prunes the whole Docker host, not just this project):

docker system prune -af

Deploy Postgres SSL certificates:

Certificate authority:

Generate RootCA.pem, RootCA.key & RootCA.crt:

openssl req -x509 -nodes -new -sha256 -days 1024 -newkey rsa:2048 -keyout RootCA.key -out RootCA.pem -subj "/C=US/CN=Example-Root-CA"
openssl x509 -outform pem -in RootCA.pem -out RootCA.crt

Domain name certificate:

First, create a file `domains.ext` that lists all of your local domains:

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = fake1.local
DNS.3 = fake2.local


Generate localhost.key, localhost.csr, and localhost.crt:

openssl req -new -nodes -newkey rsa:2048 -keyout localhost.key -out localhost.csr -subj "/C=US/ST=YourState/L=YourCity/O=Example-Certificates/CN=localhost.local"
openssl x509 -req -sha256 -days 1024 -in localhost.csr -CA RootCA.pem -CAkey RootCA.key -CAcreateserial -extfile domains.ext -out localhost.crt
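
You can sanity-check the resulting chain before deploying it:

openssl verify -CAfile RootCA.pem localhost.crt

Wiring the files into Postgres is then a matter of mounting them and enabling SSL. A hedged Compose fragment (the service name and paths are assumptions; Postgres also requires the key file to be owned by the postgres user with 0600 permissions):

  db:
    image: postgres:13-alpine
    command: >
      -c ssl=on
      -c ssl_cert_file=/var/lib/postgresql/localhost.crt
      -c ssl_key_file=/var/lib/postgresql/localhost.key
    volumes:
      - ./localhost.crt:/var/lib/postgresql/localhost.crt:ro
      - ./localhost.key:/var/lib/postgresql/localhost.key:ro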

Django schema migrations after build:

docker exec -it Web bash

./manage.py makemigrations

./manage.py migrate
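
The same steps can also run non-interactively, and you will likely want an admin account as well (this assumes the container really is named Web, as above):

docker exec Web ./manage.py migrate --noinput
docker exec -it Web ./manage.py createsuperuser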

Suggestions:

Please note:

This project is under constant development. Feel free to contribute to it!

To fully deploy the production environment, the server must support HTTPS via nginx-proxy-letsencrypt. In that environment, uncomment the services below in docker-compose.prod.yml, then rename the nginx-production folder to nginx or point the configuration below at it directly.

In the production environment, adjust the access permissions of the following folder (note the uppercase -R for recursive):

chown -R user:user docker-data/appdata

  # nginx-proxy:
  #   container_name: nginx-proxy
  #   build: nginx
  #   restart: always
  #   ports:
  #     - 443:443
  #     - 80:80
  #   volumes:
  #     - certs:/etc/nginx/certs
  #     - html:/usr/share/nginx/html
  #     - vhost:/etc/nginx/vhost.d
  #     - /var/run/docker.sock:/tmp/docker.sock:ro
  #     - ./logs:/var/log/nginx
  #     - static_volume:/app/staticfiles
  #   depends_on:
  #     - web
  #   networks:
  #     - main

  # nginx-proxy-letsencrypt:
  #   image: jrcs/letsencrypt-nginx-proxy-companion
  #   env_file:
  #     - ./.env.prod.proxy-companion
  #   volumes:
  #     - /var/run/docker.sock:/var/run/docker.sock:ro
  #     - certs:/etc/nginx/certs
  #     - html:/usr/share/nginx/html
  #     - vhost:/etc/nginx/vhost.d
  #     - acme:/etc/acme.sh
  #   depends_on:
  #     - nginx-proxy
  #   networks:
  #     - main
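
For reference, the companion container reads its defaults from .env.prod.proxy-companion, while certificate issuance is triggered by environment variables on the proxied web service. A hedged sketch with placeholder values:

# .env.prod.proxy-companion (placeholder values)
DEFAULT_EMAIL=admin@example.com
NGINX_PROXY_CONTAINER=nginx-proxy

# And on the proxied web service in docker-compose.prod.yml:
#   environment:
#     - VIRTUAL_HOST=myotis.example.com
#     - LETSENCRYPT_HOST=myotis.example.com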

TODO:

  • Unit tests
  • Integration tests
  • Smoke tests
  • Add Terraform script
  • Add CI/CD pipeline (Jenkins)

Known Errors:

Author: inputlessdb
Source Code: https://github.com/inputlessdb/inputlessdb-Myotis 
License: Apache-2.0
