Michio JP

How to Visualize Neural Networks in Python

A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. In this sense, neural networks refer to systems of neurons, either organic or artificial in nature.

Neural networks can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria. The concept of neural networks, which has its roots in artificial intelligence, is swiftly gaining popularity in the development of trading systems.

There are several tools and packages that we can use to visualize neural networks. In this tutorial we will talk about four of them:

  • 1: plot_model from TensorFlow/Keras
  • 2: ANN Visualizer
  • 3: Netron
  • 4: TensorBoard

1: TensorFlow/Keras plot_model

Keras/TensorFlow comes with a native function to help visualize the components and the structure of your artificial neural network. The plot_model() function can be used to visualize any Keras or TensorFlow generated neural network. It gives you a flow chart of the input, the layers and the output of your artificial neural network. plot_model takes as input the model, and then the filename you want to save your plot as via the 'to_file' argument.

# Load utils
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model

# Build your model (example layers)
model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Visualize the model
plot_model(model, to_file='my_ann_model.png', show_shapes=False)

# Visualize the model, showing the input and output shapes
plot_model(model, to_file='my_ann_model_shapes.png', show_shapes=True)

2: ANN-Visualizer

This is another alternative for visualizing the components of a neural network. There can be some challenges when using this tool; however, it is quite simple to use.

To install it you can use pip via

pip install ann-visualizer

To use ANN Visualizer, build a Keras model (as above) and do the following:

from ann_visualizer.visualize import ann_viz
import graphviz

# Generate a .gv description of the network, then render it with graphviz
ann_viz(model, filename='my_ann_model.gv', title='Artificial Neuron')
graph_file = graphviz.Source.from_file('my_ann_model.gv')
graph_file

3: Netron

Netron is another alternative. It comes as a standalone desktop application that is cross-platform, since it was built with Electron and React. There is also a free online service, by the same team that made Netron, for visualizing the components of an ANN.

You can install netron as follows

# For Python
pip install netron
# For Linux
snap install netron
# For macOS
brew install --cask netron

To use Netron, save your neural network model as h5 or any other supported format, then open it in the Netron app or upload it to the online service. That is it.
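If you installed the Python package, you can also launch the viewer straight from a script. A minimal sketch, assuming you already saved the model from section 1 as my_ann_model.h5:

import netron

# Serves the model visualization in a local browser tab
netron.start('my_ann_model.h5')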

4: TensorBoard

Last but not least is TensorBoard from TensorFlow. Beyond visualizing the components of a neural network, this library offers tons of other features.

To use it, install it via pip as below

pip install tensorboard
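A common workflow is to log training runs with the Keras TensorBoard callback and then point TensorBoard at the log directory. A minimal sketch, reusing the model from section 1 and assuming X_train and y_train hold your training data:

import tensorflow as tf

# Log the model graph and per-epoch metrics to ./logs
tb_callback = tf.keras.callbacks.TensorBoard(log_dir='logs', histogram_freq=1)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X_train, y_train, epochs=5, callbacks=[tb_callback])

Then launch the dashboard with: tensorboard --logdir logs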

That is the end of this tutorial on four ways to visualize neural networks in Python.

Happy Coding !!!

#python #neuralnetwork #machinelearning #deeplearning  

Royce Reinger

A Cross-platform Python Library for Differentiable Programming

Pennylane

PennyLane is a cross-platform Python library for differentiable programming of quantum computers.

Train a quantum computer the same way as a neural network.  

Key Features

Machine learning on quantum hardware. Connect to quantum hardware using PyTorch, TensorFlow, JAX, Keras, or NumPy. Build rich and flexible hybrid quantum-classical models.

Device-independent. Run the same quantum circuit on different quantum backends. Install plugins to access even more devices, including Strawberry Fields, Amazon Braket, IBM Q, Google Cirq, Rigetti Forest, Qulacs, Pasqal, Honeywell, and more.

Follow the gradient. Hardware-friendly automatic differentiation of quantum circuits.

Batteries included. Built-in tools for quantum machine learning, optimization, and quantum chemistry. Rapidly prototype using built-in quantum simulators with backpropagation support.
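As a taste of the workflow, here is a minimal sketch of differentiating a one-qubit circuit on the built-in simulator (the device and gate names are standard PennyLane API; the circuit itself is just an illustration):

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = np.array(0.54, requires_grad=True)
print(circuit(theta))            # expectation value <Z>
print(qml.grad(circuit)(theta))  # its gradient, computed automatically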

Installation

PennyLane requires Python version 3.8 and above. Installation of PennyLane, as well as all dependencies, can be done using pip:

python -m pip install pennylane

Docker support

Docker support exists for building using CPU and GPU (Nvidia CUDA 11.1+) images. See a more detailed description here.

Getting started

For an introduction to quantum machine learning, guides and resources are available on PennyLane's quantum machine learning hub.

You can also check out our documentation for quickstart guides to using PennyLane, and detailed developer guides on how to write your own PennyLane-compatible quantum device.

Tutorials and demonstrations

Take a deeper dive into quantum machine learning by exploring cutting-edge algorithms on our demonstrations page.


All demonstrations are fully executable, and can be downloaded as Jupyter notebooks and Python scripts.

If you would like to contribute your own demo, see our demo submission guide.

Videos

Seeing is believing! Check out our videos to learn about PennyLane, quantum computing concepts, and more.


Contributing to PennyLane

We welcome contributions—simply fork the PennyLane repository, and then make a pull request containing your contribution. All contributors to PennyLane will be listed as authors on the releases. All users who contribute significantly to the code (new plugins, new functionality, etc.) will be listed on the PennyLane arXiv paper.

We also encourage bug reports, suggestions for new features and enhancements, and even links to cool projects or applications built on PennyLane.

See our contributions page and our developer hub for more details.

Support

If you are having issues, please let us know by posting the issue on our GitHub issue tracker.

We also have a PennyLane discussion forum—come join the community and chat with the PennyLane team.

Note that we are committed to providing a friendly, safe, and welcoming environment for all. Please read and respect the Code of Conduct.


Download Details:

Author: PennyLaneA
Source Code: https://github.com/PennyLaneAI/pennylane 
License: Apache-2.0 license

#machinelearning #python #deeplearning #neuralnetwork #tensorflow 

Royce Reinger

Netron: Visualizer for Neural Network, Deep Learning, & ML Models

Netron

Netron is a viewer for neural network, deep learning and machine learning models.

Netron supports ONNX, TensorFlow Lite, Caffe, Keras, Darknet, PaddlePaddle, ncnn, MNN, Core ML, RKNN, MXNet, MindSpore Lite, TNN, Barracuda, Tengine, CNTK, TensorFlow.js, Caffe2 and UFF.

Netron has experimental support for PyTorch, TensorFlow, TorchScript, OpenVINO, Torch, Vitis AI, kmodel, Arm NN, BigDL, Chainer, Deeplearning4j, MediaPipe, MegEngine, ML.NET and scikit-learn.

Install

macOS: Download the .dmg file or run brew install --cask netron

Linux: Download the .AppImage file or run snap install netron

Windows: Download the .exe installer or run winget install -s winget netron

Browser: Start the browser version.

Python Server: Run pip install netron and netron [FILE] or netron.start('[FILE]').

Models

Sample model files can be downloaded or opened using the browser version.


Download Details:

Author: lutzroeder
Source Code: https://github.com/lutzroeder/Netron 
License: MIT license

#machinelearning #python #ai #deeplearning #neuralnetwork 

Royce Reinger

WTTE-RNN A Framework for Churn and Time to Event Prediction

WTTE-RNN

Weibull Time To Event Recurrent Neural Network

A less hacky machine-learning framework for churn and time-to-event prediction. Forecasting problems as diverse as server monitoring, earthquake prediction and churn prediction can be posed as the problem of predicting the time to an event. WTTE-RNN is an algorithm and a philosophy about how this should be done.

Installation

Python

Check out the README for the Python package.

If this seems like overkill, the basic implementation can be found inlined as a Jupyter notebook.

Ideas and Basics

You have data consisting of many time-series of events and want to use historic data to predict the time to the next event (TTE). If you haven't observed the last event yet, we've only observed a minimum bound of the TTE to train on. This results in what's called censored data (in red):

Censored data

Instead of predicting the TTE itself the trick is to let your machine learning model output the parameters of a distribution. This could be anything but we like the Weibull distribution because it's awesome. The machine learning algorithm could be anything gradient-based but we like RNNs because they are awesome too.

example WTTE-RNN architecture

The next step is to train the algo of choice with a special log-loss that can work with censored data. The intuition behind it is that we want to assign high probability at the next event, or low probability where there weren't any events (for censored data):

WTTE-RNN prediction over a timeline
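Concretely, the continuous-time version of this censored Weibull log-likelihood takes only a few lines. A minimal sketch (u marks whether the event was observed, alpha and beta are the predicted Weibull parameters):

import numpy as np

def weibull_censored_loglik(t, u, alpha, beta):
    # u = 1: event observed at time t, reward the log-density log f(t)
    # u = 0: censored at time t, reward the log-survival log S(t)
    cum_hazard = (t / alpha) ** beta
    return u * (np.log(beta / alpha) + (beta - 1) * np.log(t / alpha)) - cum_hazard

# Training minimizes the negative of this, averaged over the batch.
print(weibull_censored_loglik(t=2.0, u=1, alpha=3.0, beta=1.5))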

What we get is a pretty neat prediction about the distribution of the TTE in each step (here for a single event):

WTTE-RNN prediction

A neat side result is that the predicted parameters form a 2-d embedding that can be used to visualize and group predictions by how soon (alpha) and how sure (beta). Here by stacking timelines of predicted alpha (left) and beta (right):

WTTE-RNN alphabeta.png

Warnings

There's a lot of mathematical theory justifying the use of this nice loss function in certain situations:

loss-equation

So for censored data it only rewards pushing the distribution up, beyond the point of censoring. To get this to work you need the censoring mechanism to be independent of your feature data. If your features contain information about the point of censoring, your algorithm will learn to cheat by predicting far away based on the probability of censoring instead of the TTE: a type of overfitting/artifact learning. Global features can have this effect if not properly treated.

Status and Roadmap

The project is under development. The goal is to create a forkable and easily deployable model framework. WTTE is the algorithm but the whole project aims to be more. It's a visual philosophy and an opinionated idea about how churn-monitoring and reporting can be made beautiful and easy.

Pull-requests, recommendations, comments and contributions very welcome.

What's in the repository

  • Transformations
    • Data pipeline transformations (pandas.DataFrame of expected format to numpy)
    • Time to event and censoring indicator calculations
  • Weibull functions (cdf, pdf, quantile, mean etc)
  • Objective functions:
    • Tensorflow
    • Keras (Tensorflow + Theano)
  • Keras helpers
    • Weibull output layers
    • Loss functions
    • Callbacks
  • ~~ Lots of example-implementations ~~

Citation

@MastersThesis{martinsson:Thesis:2016,
    author = {Egil Martinsson},
    title  = {{WTTE-RNN : Weibull Time To Event Recurrent Neural Network}},
    school = {Chalmers University Of Technology},
    year   = {2016},
}

Contributing

Contributions/PR/Comments etc are very welcome! Post an issue if you have any questions and feel free to reach out to egil.martinsson[at]gmail.com.

Contributors (by order of commit)

  • Egil Martinsson
  • Dayne Batten (made the first keras-implementation)
  • Clay Kim
  • Jannik Hoffjann
  • Daniel Klevebring
  • Jeongkyu Shin
  • Joongi Kim
  • Jonghyun Park


Download Details:

Author: Ragulpr
Source Code: https://github.com/ragulpr/wtte-rnn/ 
License: MIT license

#machinelearning #python #neuralnetwork #tensorflow 

Royce Reinger

Muzero-general: MuZero

MuZero General

A commented and documented implementation of MuZero based on the Google DeepMind paper (Schrittwieser et al., Nov 2019) and the associated pseudocode. It is designed to be easily adaptable to any game or reinforcement learning environment (like gym). You only need to add a game file with the hyperparameters and the game class. Please refer to the documentation and the example. This implementation is primarily for educational purposes.
Explanatory video of MuZero

MuZero is a state-of-the-art RL algorithm for board games (Chess, Go, ...) and Atari games. It is the successor to AlphaZero, but without any knowledge of the environment's underlying dynamics. MuZero learns a model of the environment and uses an internal representation that contains only the useful information for predicting the reward, value, policy and transitions. MuZero is also close to Value Prediction Networks. See How it works.

Features

  •  Residual Network and Fully connected network in PyTorch
  •  Multi-Threaded/Asynchronous/Cluster with Ray
  •  Multi GPU support for the training and the selfplay
  •  TensorBoard real-time monitoring
  •  Model weights automatically saved at checkpoints
  •  Single and two player mode
  •  Commented and documented
  •  Easily adaptable for new games
  •  Examples of board games, Gym and Atari games (See list of implemented games)
  •  Pretrained weights available
  •  Windows support (Experimental / Workaround: Use the notebook in Google Colab)

Further improvements

Here is a list of features which could be interesting to add but which are not in MuZero's paper. We are open to contributions and other ideas.

Demo

All performances are tracked and displayed in real time in TensorBoard:

cartpole training summary

Testing Lunar Lander:

lunarlander training preview

Games already implemented

  • Cartpole (Tested with the fully connected network)
  • Lunar Lander (Tested in deterministic mode with the fully connected network)
  • Gridworld (Tested with the fully connected network)
  • Tic-tac-toe (Tested with the fully connected network and the residual network)
  • Connect4 (Slightly tested with the residual network)
  • Gomoku
  • Twenty-One / Blackjack (Tested with the residual network)
  • Atari Breakout

Tests are done on Ubuntu with 16 GB RAM / Intel i7 / GTX 1050Ti Max-Q. We make sure to obtain a level of progression which shows that the agent has learned, but we do not systematically reach a human level. For certain environments, we notice a regression after a certain time. The proposed configurations are certainly not optimal, and we do not focus for now on the optimization of hyperparameters. Any help is welcome.

Code structure

code structure

Network summary:

Getting started

Installation

git clone https://github.com/werner-duvaud/muzero-general.git
cd muzero-general

pip install -r requirements.lock

Run

python muzero.py

To visualize the training results, run in a new terminal:

tensorboard --logdir ./results

Config

You can adapt the configurations of each game by editing the MuZeroConfig class of the respective file in the games folder.
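For instance, to make self-play cheaper you might lower the number of MCTS simulations. A hedged sketch of the kind of edit to make; the attribute names below are assumptions drawn from typical MuZero configs, so check the real MuZeroConfig class in your game file for the authoritative list:

# Inside games/cartpole.py (or your own game file)
class MuZeroConfig:
    def __init__(self):
        self.num_simulations = 50    # MCTS simulations per self-play move (assumed name)
        self.batch_size = 128        # training batch size (assumed name)
        self.training_steps = 10000  # total optimizer steps (assumed name)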

Related work

  • EfficientZero (Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel, Yang Gao)
  • Sampled MuZero (Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Mohammadamin Barekatain, Simon Schmitt, David Silver)

Authors

Please use this bibtex if you want to cite this repository (master branch) in your publications:

@misc{muzero-general,
  author       = {Werner Duvaud, Aurèle Hainaut},
  title        = {MuZero General: Open Reimplementation of MuZero},
  year         = {2019},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/werner-duvaud/muzero-general}},
}

Getting involved

Download Details:

Author: Werner-duvaud
Source Code: https://github.com/werner-duvaud/muzero-general 
License: MIT license

#machinelearning #python #deeplearning #neuralnetwork 

Royce Reinger

Plug and Play Modules to Optimize The Performance Of Your AI Systems

nebullvm

Plug and play modules to optimize the performance of your AI systems


Nebullvm is an ecosystem of plug and play modules to optimize the performance of your AI systems. The optimization modules are stack-agnostic and work with any library. They are designed to be easily integrated into your system, providing a quick and seamless boost to its performance. Simply plug and play to start realizing the benefits of optimized performance right away.

If you like the idea, give us a star to show your support for the project ⭐

What can this help with?

We currently provide multiple modules to boost the performance of your AI systems:

✅ Speedster: Automatically apply the best set of SOTA optimization techniques to achieve the maximum inference speed-up on your hardware.

Nos: Automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas - Effortless optimization at its finest!

✅ OpenAlphaTensor: Increase the computational performance of an AI model with custom-generated matrix multiplication algorithms fine-tuned for your specific hardware.

✅ Forward-Forward: The Forward Forward algorithm is a method for training deep neural networks that replaces the backpropagation forward and backward passes with two forward passes.

Next modules and roadmap

We are actively working on incorporating the following modules, as requested by members of our community, in upcoming releases:

  •  Promptify: Effortlessly personalize large generative models from OpenAI, Cohere, and HF APIs to your specific context and requirements.
  •  CloudSurfer: Automatically discover the optimal cloud configuration and hardware on AWS, GCP and Azure to run your AI models.
  •  OptiMate: Interactive tool guiding savvy users in achieving the best inference performance out of a given model / hardware setup.
  •  TrainingSim: Easily simulate the training of large AI models on a distributed infrastructure to predict training behaviours without actual implementation.

Contributing

As an open source project in a rapidly evolving field, we welcome contributions of all kinds, including new features, improved infrastructure, and better documentation. If you're interested in contributing, please see the linked page for more information on how to get involved.


Join the community | Contribute to the library


Documentation: docs.nebuly.com/


Download Details:

Author: nebuly-ai
Source Code: https://github.com/nebuly-ai/nebullvm 
License: Apache-2.0 license

#machinelearning #deeplearning #neuralnetwork #tensorflow 

Royce Reinger

Python package for AutoML on Tabular Data with Feature Engineering

MLJAR Automated Machine Learning for Humans


Automated Machine Learning

The mljar-supervised is an Automated Machine Learning Python package that works with tabular data. It is designed to save time for a data scientist. It abstracts the common way to preprocess the data, construct the machine learning models, and perform hyper-parameters tuning to find the best model 🏆. It is not a black box, as you can see exactly how the ML pipeline is constructed (with a detailed Markdown report for each ML model).

The mljar-supervised will help you with:

  • explaining and understanding your data (Automatic Exploratory Data Analysis),
  • trying many different machine learning models (Algorithm Selection and Hyper-Parameters tuning),
  • creating Markdown reports from analysis with details about all models (Automatic-Documentation),
  • saving, re-running and loading the analysis and ML models.

It has four built-in modes of work:

  • Explain mode, which is ideal for explaining and understanding the data, with many data explanations, like decision trees visualization, linear models coefficients display, permutation importances and SHAP explanations of data,
  • Perform mode for building ML pipelines to use in production,
  • Compete mode that trains highly-tuned ML models with ensembling and stacking, with the purpose of use in ML competitions,
  • Optuna mode that can be used to search for highly-tuned ML models; it should be used when performance is the most important and computation time is not limited (available from version 0.10.0).

Of course, you can further customize the details of each mode to meet the requirements.

Excel Add-in

We are working on an Excel add-in for Machine Learning. You can train ML models without leaving the worksheet. Model training is done locally on your machine (no cloud). You can train models with MLJAR AutoML or single models (manual hyperparameter selection).

Interested? Please fill out the form, and we will inform you when it becomes available.

What's good in it?

  • It uses many algorithms: Baseline, Linear, Random Forest, Extra Trees, LightGBM, Xgboost, CatBoost, Neural Networks, and Nearest Neighbors.
  • It can compute an Ensemble based on the greedy algorithm from the Caruana paper.
  • It can stack models to build a level-2 ensemble (available in Compete mode or after setting the stack_models parameter).
  • It can do features preprocessing, like missing values imputation and converting categoricals. What is more, it can also handle target values preprocessing.
  • It can do advanced features engineering, like Golden Features, Features Selection, Text and Time Transformations.
  • It can tune hyper-parameters with a not-so-random-search algorithm (random search over a defined set of values) and hill climbing to fine-tune final models.
  • It can compute the Baseline for your data, so that you will know if you need Machine Learning or not!
  • It has extensive explanations. This package trains simple Decision Trees with max_depth <= 5, so you can easily visualize them with the amazing dtreeviz to better understand your data.
  • The mljar-supervised uses simple linear regression and includes its coefficients in the summary report, so you can check which features are used the most in the linear model.
  • It cares about explainability of models: for every algorithm, the feature importance is computed based on permutation. Additionally, for every algorithm the SHAP explanations are computed: feature importance, dependence plots, and decision plots (explanations can be switched off with the explain_level parameter).
  • There is automatic documentation for every ML experiment run with AutoML. The mljar-supervised creates markdown reports from AutoML training, full of ML details, metrics and charts.

Automatic Documentation

The AutoML Report

The report from running AutoML will contain a table with information about each model's score and the time needed to train it. For each model there is a link, which you can click to see the model's details. The performance of all ML models is presented as scatter and box plots, so you can visually inspect which algorithms perform best 🏆.

AutoML leaderboard

The Decision Tree Report

An example Decision Tree summary with tree visualization. For classification tasks, additional metrics are provided:

  • confusion matrix
  • threshold (optimized in the case of binary classification task)
  • F1 score
  • Accuracy
  • Precision, Recall, MCC

Decision Tree summary

The LightGBM Report

An example LightGBM summary:

LightGBM summary

Available Modes

Details about the AutoML modes are presented in a table in the docs.

Explain

automl = AutoML(mode="Explain")

It is aimed to be used when the user wants to explain and understand the data.

  • It is using 75%/25% train/test split.
  • It is using: Baseline, Linear, Decision Tree, Random Forest, Xgboost, Neural Network algorithms and ensemble.
  • It has full explanations: learning curves, importance plots, and SHAP plots.

Perform

automl = AutoML(mode="Perform")

It should be used when the user wants to train a model that will be used in real-life use cases.

  • It is using 5-fold CV.
  • It is using: Linear, Random Forest, LightGBM, Xgboost, CatBoost and Neural Network. It uses ensembling.
  • It has learning curves and importance plots in reports.

Compete

automl = AutoML(mode="Compete")

It should be used for machine learning competitions.

  • It adapts the validation strategy depending on dataset size and total_time_limit. It can be: train/test split (80/20), 5-fold CV or 10-fold CV.
  • It is using: Linear, Decision Tree, Random Forest, Extra Trees, LightGBM, Xgboost, CatBoost, Neural Network and Nearest Neighbors. It uses ensemble and stacking.
  • It has only learning curves in the reports.

Optuna

automl = AutoML(mode="Optuna", optuna_time_budget=3600)

It should be used when the performance is the most important and time is not limited.

  • It is using 10-fold CV
  • It is using: Random Forest, Extra Trees, LightGBM, Xgboost, and CatBoost. Those algorithms are tuned by Optuna framework for optuna_time_budget seconds, each. Algorithms are tuned with original data, without advanced feature engineering.
  • It is using advanced feature engineering, stacking and ensembling. The hyperparameters found for original data are reused with those steps.
  • It produces learning curves in the reports.

How to save and load AutoML?

All models in the AutoML are saved and loaded automatically. No need to call save() or load().

Example:

Train AutoML

automl = AutoML(results_path="AutoML_classifier")
automl.fit(X, y)

You will have all models saved in the AutoML_classifier directory. Each model will have a separate directory with the README.md file with all details from the training.

Compute predictions

automl = AutoML(results_path="AutoML_classifier")
automl.predict(X)

The AutoML automatically loads models from the results_path directory. If you call fit() on an already trained AutoML, you will get a warning message that AutoML is already fitted.

Why do you automatically save all models?

All models are automatically saved to be able to restore the training after an interruption. For example, say you are training AutoML for 48 hours, and after 47 hours there is some unexpected interruption. In MLJAR AutoML you just call the same training code after the interruption, and AutoML reloads the already trained models and finishes the training.
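A minimal sketch of that resume pattern (the results_path and time limit here are illustrative; total_time_limit is the same parameter mentioned in the Compete mode section):

automl = AutoML(results_path="AutoML_classifier", total_time_limit=48 * 3600)
automl.fit(X, y)  # re-running this exact call after an interruption resumes training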

Supported evaluation metrics (eval_metric argument in AutoML())

  • for binary classification: logloss, auc, f1, average_precision, accuracy - default is logloss
  • for multiclass classification: logloss, f1, accuracy - default is logloss
  • for regression: rmse, mse, mae, r2, mape, spearman, pearson - default is rmse

If you don't find the eval_metric that you need, please add a new issue. We will add it.
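For example, to optimize a binary classifier for f1 instead of the default logloss (a minimal sketch; X and y as in the other examples):

from supervised.automl import AutoML

automl = AutoML(eval_metric="f1")
automl.fit(X, y)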

Examples

👉 Binary Classification Example

There is a simple interface available with fit and predict methods.

import pandas as pd
from sklearn.model_selection import train_test_split
from supervised.automl import AutoML

df = pd.read_csv(
    "https://raw.githubusercontent.com/pplonski/datasets-for-start/master/adult/data.csv",
    skipinitialspace=True,
)
X_train, X_test, y_train, y_test = train_test_split(
    df[df.columns[:-1]], df["income"], test_size=0.25
)

automl = AutoML()
automl.fit(X_train, y_train)

predictions = automl.predict(X_test)

AutoML fit will print:

Create directory AutoML_1
AutoML task to be solved: binary_classification
AutoML will use algorithms: ['Baseline', 'Linear', 'Decision Tree', 'Random Forest', 'Xgboost', 'Neural Network']
AutoML will optimize for metric: logloss
1_Baseline final logloss 0.5519845471086654 time 0.08 seconds
2_DecisionTree final logloss 0.3655910192804364 time 10.28 seconds
3_Linear final logloss 0.38139916864708445 time 3.19 seconds
4_Default_RandomForest final logloss 0.2975204390214936 time 79.19 seconds
5_Default_Xgboost final logloss 0.2731086827200411 time 5.17 seconds
6_Default_NeuralNetwork final logloss 0.319812276905242 time 21.19 seconds
Ensemble final logloss 0.2731086821194617 time 1.43 seconds
  • the AutoML results in a Markdown report
  • the Xgboost Markdown report; please take a look at the amazing dependence plots produced by the SHAP package 💖
  • the Decision Tree Markdown report; please take a look at the beautiful tree visualization ✨
  • the Logistic Regression Markdown report; please take a look at the coefficients table, and you can compare the SHAP plots between Xgboost, Decision Tree and Logistic Regression ☕

👉 Multi-Class Classification Example

The example code for classification of the optical recognition of handwritten digits dataset. Running this code takes less than 30 minutes and results in a test accuracy of ~98%.

import pandas as pd 
# scikit-learn utilities
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
# mljar-supervised package
from supervised.automl import AutoML

# load the data
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    pd.DataFrame(digits.data), digits.target, stratify=digits.target, test_size=0.25,
    random_state=123
)

# train models with AutoML
automl = AutoML(mode="Perform")
automl.fit(X_train, y_train)

# compute the accuracy on test data
predictions = automl.predict_all(X_test)
print(predictions.head())
print("Test accuracy:", accuracy_score(y_test, predictions["label"].astype(int)))

👉 Regression Example

Regression example on California Housing house prices data.

import numpy as np
import pandas as pd
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from supervised.automl import AutoML # mljar-supervised

# Load the data
housing = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(
    pd.DataFrame(housing.data, columns=housing.feature_names),
    housing.target,
    test_size=0.25,
    random_state=123,
)

# train models with AutoML
automl = AutoML(mode="Explain")
automl.fit(X_train, y_train)

# compute the MSE on test data
predictions = automl.predict(X_test)
print("Test MSE:", mean_squared_error(y_test, predictions))

👉 More Examples

FAQ

What method is used for hyperparameters optimization?

  • For the Explain, Perform and Compete modes, a random search method combined with hill climbing is used. In this approach all checked models are saved and used for building the Ensemble.
  • For the Optuna mode, the Optuna framework is used, with the TPE sampler for tuning. Models checked during the Optuna hyperparameters search are not saved; only the best model (the final model from tuning) is saved. You can check the details about the hyperparameters checked by Optuna in the study files in the optuna directory in your AutoML results_path.

How to save and load AutoML?

The save and load of AutoML models is automatic. All models created during AutoML training are saved in the directory set in results_path (an argument of the AutoML() constructor). If no results_path is set, then the directory is created based on the following naming convention: AutoML_{number}, where number runs from 1 to 1000 (depending on which directory name is free).

Example save and load:

automl = AutoML(results_path='AutoML_1')
automl.fit(X, y)

All models from the AutoML run are saved in the AutoML_1 directory.

To load models:

automl = AutoML(results_path='AutoML_1')
automl.predict(X)

How to set ML task (select between classification or regression)?

The MLJAR AutoML can work with:

  • binary classification
  • multi-class classification
  • regression

ML task detection is automatic, based on target values. If you want to manually force AutoML to select the ML task, set the ml_task parameter. It can be set to 'binary_classification', 'multiclass_classification' or 'regression'.

Example:

automl = AutoML(ml_task='regression')
automl.fit(X, y)

In the above example a regression model will be fitted.

How to reuse Optuna hyperparameters?

You can reuse Optuna hyperparameters that were found in another AutoML training run. You need to pass them in the optuna_init_params argument. All hyperparameters found during Optuna tuning are saved in the optuna/optuna.json file (inside the results_path directory).

Example:

import json

optuna_init = json.loads(open('previous_AutoML_training/optuna/optuna.json').read())

automl = AutoML(
    mode='Optuna',
    optuna_init_params=optuna_init
)
automl.fit(X, y)

When reusing Optuna hyperparameters, the Optuna tuning is simply skipped. The model will be trained with the hyperparameters set in optuna_init_params. Right now there is no option to continue Optuna tuning with seed parameters.

How to know the order of classes for a binary or multiclass problem when using predict_proba?

To get predicted probabilities with information about the class label, please use the predict_all() method. It returns a pandas DataFrame with class names in the columns. The order of predicted columns is the same in the predict_proba() and predict_all() methods. The predict_all() method will additionally have a column with the predicted class label.
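A minimal sketch, reusing automl and X_test from the earlier examples:

proba = automl.predict_proba(X_test)    # probabilities, columns in class order
all_preds = automl.predict_all(X_test)  # per-class probability columns plus a "label" column
print(all_preds.head())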

Documentation

For details please check mljar-supervised docs.

Installation

From PyPi repository:

pip install mljar-supervised

To install this package with conda run:

conda install -c conda-forge mljar-supervised

From source code:

git clone https://github.com/mljar/mljar-supervised.git
cd mljar-supervised
python setup.py install

Installation for development

git clone https://github.com/mljar/mljar-supervised.git
virtualenv venv --python=python3.6
source venv/bin/activate
pip install -r requirements.txt
pip install -r requirements_dev.txt

Running in the docker:

FROM python:3.7-slim-buster
RUN apt-get update && apt-get -y update
RUN apt-get install -y build-essential python3-pip python3-dev
RUN pip3 -q install pip --upgrade
RUN pip3 install mljar-supervised jupyter
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root"]

Install from GitHub with pip:

pip install -q -U git+https://github.com/mljar/mljar-supervised.git@master

Demo

In the below demo GIF you will see:

  • MLJAR AutoML trained in Jupyter Notebook on titanic dataset
  • overview of created files
  • showcase of selected plots created during AutoML training
  • algorithm comparison report along with their plots
  • example of README file and csv file with results

Cite

Would you like to cite MLJAR? Great! :)

You can cite MLJAR as following:

@misc{mljar,
  author    = {Aleksandra P\l{}o\'{n}ska and Piotr P\l{}o\'{n}ski},
  year      = {2021},
  publisher = {MLJAR},
  address   = {\L{}apy, Poland},
  title     = {MLJAR: State-of-the-art Automated Machine Learning Framework for Tabular Data.  Version 0.10.3},
  url       = {https://github.com/mljar/mljar-supervised}
}

We would love to hear how you have used MLJAR AutoML in your projects. Please feel free to let us know.

Commercial support

Looking for commercial support? Do you need new feature implementation? Please contact us by email for details.

MLJAR

The mljar-supervised is an open-source project created by MLJAR. We care about ease of use in Machine Learning. The mljar.com service provides a beautiful and simple user interface for building machine learning models.


Documentation: https://supervised.mljar.com/

Source Code: https://github.com/mljar/mljar-supervised

Looking for commercial support: Please contact us by email for details


Download Details:

Author: Mljar
Source Code: https://github.com/mljar/mljar-supervised 
License: MIT license

#machinelearning #datascience #neuralnetwork #python 

Royce Reinger

Yolov3-tf2: YoloV3 Implemented in Tensorflow 2.0

YoloV3 Implemented in TensorFlow 2.0

This repo provides a clean implementation of YoloV3 in TensorFlow 2.0 using all the best practices.

Key Features

  •  TensorFlow 2.0
  •  yolov3 with pre-trained Weights
  •  yolov3-tiny with pre-trained Weights
  •  Inference example
  •  Transfer learning example
  •  Eager mode training with tf.GradientTape
  •  Graph mode training with model.fit
  •  Functional model with tf.keras.layers
  •  Input pipeline using tf.data
  •  Tensorflow Serving
  •  Vectorized transformations
  •  GPU accelerated
  •  Fully integrated with absl-py from abseil.io
  •  Clean implementation
  •  Following the best practices
  •  MIT License

demo demo

Usage

Installation

Conda (Recommended)

# Tensorflow CPU
conda env create -f conda-cpu.yml
conda activate yolov3-tf2-cpu

# Tensorflow GPU
conda env create -f conda-gpu.yml
conda activate yolov3-tf2-gpu

Pip

pip install -r requirements.txt

Nvidia Driver (For GPU)

# Ubuntu 18.04
sudo apt-add-repository -r ppa:graphics-drivers/ppa
sudo apt install nvidia-driver-430
# Windows/Other
https://www.nvidia.com/Download/index.aspx

Convert pre-trained Darknet weights

# yolov3
wget https://pjreddie.com/media/files/yolov3.weights -O data/yolov3.weights
python convert.py --weights ./data/yolov3.weights --output ./checkpoints/yolov3.tf

# yolov3-tiny
wget https://pjreddie.com/media/files/yolov3-tiny.weights -O data/yolov3-tiny.weights
python convert.py --weights ./data/yolov3-tiny.weights --output ./checkpoints/yolov3-tiny.tf --tiny

Detection

# yolov3
python detect.py --image ./data/meme.jpg

# yolov3-tiny
python detect.py --weights ./checkpoints/yolov3-tiny.tf --tiny --image ./data/street.jpg

# webcam
python detect_video.py --video 0

# video file
python detect_video.py --video path_to_file.mp4 --weights ./checkpoints/yolov3-tiny.tf --tiny

# video file with output
python detect_video.py --video path_to_file.mp4 --output ./output.avi

Training

I have created a complete tutorial on how to train from scratch using the VOC2012 Dataset. See the documentation here https://github.com/zzh8829/yolov3-tf2/blob/master/docs/training_voc.md

For customized training, you need to generate tfrecord files following the TensorFlow Object Detection API. For example, you can use Microsoft VoTT to generate such a dataset. You can also use this script to create the pascal voc dataset.

Example command line arguments for training

python train.py --batch_size 8 --dataset ~/Data/voc2012.tfrecord --val_dataset ~/Data/voc2012_val.tfrecord --epochs 100 --mode eager_tf --transfer fine_tune

python train.py --batch_size 8 --dataset ~/Data/voc2012.tfrecord --val_dataset ~/Data/voc2012_val.tfrecord --epochs 100 --mode fit --transfer none

python train.py --batch_size 8 --dataset ~/Data/voc2012.tfrecord --val_dataset ~/Data/voc2012_val.tfrecord --epochs 100 --mode fit --transfer no_output

python train.py --batch_size 8 --dataset ~/Data/voc2012.tfrecord --val_dataset ~/Data/voc2012_val.tfrecord --epochs 10 --mode eager_fit --transfer fine_tune --weights ./checkpoints/yolov3-tiny.tf --tiny

Tensorflow Serving

You can export the model for TF Serving:

python export_tfserving.py --output serving/yolov3/1/
# verify tfserving graph
saved_model_cli show --dir serving/yolov3/1/ --tag_set serve --signature_def serving_default

The inputs are preprocessed images (see dataset.transform_images).

The outputs are:

yolo_nms_0: bounding boxes
yolo_nms_1: scores
yolo_nms_2: classes
yolo_nms_3: numbers of valid detections

Benchmark (No Training Yet)

Numbers are obtained with rough calculations from detect_video.py

Macbook Pro 13 (2.7GHz i5)

Detection      416x416   320x320   608x608
YoloV3         1000ms    500ms     1546ms
YoloV3-Tiny    100ms     58ms      208ms

Desktop PC (GTX 970)

Detection      416x416   320x320   608x608
YoloV3         74ms      57ms      129ms
YoloV3-Tiny    18ms      15ms      28ms

AWS g3.4xlarge (Tesla M60)

Detection      416x416   320x320   608x608
YoloV3         66ms      50ms      123ms
YoloV3-Tiny    15ms      10ms      24ms

RTX 2070 (credit to @AnaRhisT94)

Detection                            416x416
YoloV3 predict_on_batch              29-32ms
YoloV3 predict_on_batch + TensorRT   22-28ms

The Darknet version of YoloV3 at 416x416 takes 29ms on a Titan X. Considering the Titan X has about double the benchmark of the Tesla M60, performance-wise this implementation is pretty comparable.

Implementation Details

Eager execution

A great addition for existing TensorFlow experts, but not very easy to use without some intermediate understanding of TensorFlow graphs. It is annoying when you accidentally use incompatible features like tensor.shape[0] or some sort of Python control flow that works fine in eager mode but totally breaks down when you try to compile the model to a graph.

model(x) vs. model.predict(x)

When calling model(x) directly, we are executing the graph in eager mode. For model.predict, tf actually compiles the graph on the first run and then executes in graph mode. So if you are only running the model once, model(x) is faster since there is no compilation needed. Otherwise, model.predict or using the exported SavedModel graph is much faster (by 2x). For non-real-time usage, model.predict_on_batch is even faster, as tested by @AnaRhisT94.
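A quick way to see the difference yourself with a toy model (a minimal sketch, independent of this repo):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
x = tf.random.normal((1, 8))

y_eager = model(x)          # eager call: no graph compilation, fastest for a one-off
y_graph = model.predict(x)  # builds a graph on the first call, faster when called repeatedly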

GradientTape

Extremely useful for debugging purposes; you can set breakpoints anywhere. You can compile all the keras fitting functionalities with gradient tape using the run_eagerly argument in model.compile. From my limited testing, all training methods, including GradientTape and keras.fit, eager or not, yield similar performance. But graph mode is still preferred since it's a tiny bit more efficient.

@tf.function

@tf.function is very cool. It's like an in-between version of eager and graph. You can step through the function by disabling tf.function and then gain performance when you enable it in production. Important note: you should not pass any non-tensor parameter to a @tf.function, as it will cause re-compilation on every call. I am not sure what's the best way other than using globals.
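A small illustration of that retracing pitfall (this is standard tf.function behavior, not specific to this repo):

import tensorflow as tf

@tf.function
def scale(x, factor):
    return x * factor

x = tf.constant([1.0, 2.0])
scale(x, 2.0)               # traces a graph for factor=2.0
scale(x, 3.0)               # new Python value -> retraces the whole graph
scale(x, tf.constant(3.0))  # passing a tensor instead avoids the retrace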

absl.py (abseil)

Absolutely amazing. If you don't know already, absl.py is officially used by internal projects at Google. It standardizes the application interface for Python and many other languages. After using it within Google, I was so excited to hear abseil was going open source. It includes many decades of best practices learned from creating large, scalable applications. I literally have nothing bad to say about it; I strongly recommend absl.py to everybody.

Loading pre-trained Darknet weights

Loading the weights is very hard with the pure functional API because the layer ordering is different in tf.keras and darknet. The clean solution here is creating sub-models in keras. Keras is not able to save nested models in h5 format properly, so TF Checkpoint is recommended since it's officially supported by TensorFlow.

tf.keras.layers.BatchNormalization

It doesn't work very well for transfer learning. There are many articles and github issues all over the internet. I used a simple hack to make it work nicer on transfer learning with small batches.

What is the output of transform_targets ???

I know it's very confusing, but the output is a tuple of shapes

(
  [N, 13, 13, 3, 6],
  [N, 26, 26, 3, 6],
  [N, 52, 52, 3, 6]
)

where N is the number of labels in the batch and the last dimension "6" represents [x, y, w, h, obj, class] of the bounding boxes.

IOU and Score Threshold

The default threshold is 0.5 for both IOU and score. You can adjust them according to your needs by setting the --yolo_iou_threshold and --yolo_score_threshold flags.

Maximum number of boxes

By default there can be a maximum of 100 bounding boxes per image. If for some reason you would like to have more boxes, you can use the --yolo_max_boxes flag.

NAN Loss / Training Failed / Doesn't Converge

Many people, including me, have succeeded in training, so the code definitely works. @LongxingTan in https://github.com/zzh8829/yolov3-tf2/issues/128 provided some insights, summarized here:

  1. For NaN loss, try to make the learning rate smaller.
  2. Double-check the format of your input data. Data labelled by VoTT and labelImg is different, so make sure the input boxes are right, and check carefully that the format is x1/width, y1/height, x2/width, y2/height and NOT x1,y1,x2,y2 or x,y,w,h.

Make sure to visualize your custom dataset using this tool

python tools/visualize_dataset.py --classes=./data/voc2012.names

It will output one random image from your dataset with its label to output.jpg. Training definitely won't work if the rendered label doesn't look correct.

Command Line Args Reference

convert.py:
  --output: path to output
    (default: './checkpoints/yolov3.tf')
  --[no]tiny: yolov3 or yolov3-tiny
    (default: 'false')
  --weights: path to weights file
    (default: './data/yolov3.weights')
  --num_classes: number of classes in the model
    (default: '80')
    (an integer)

detect.py:
  --classes: path to classes file
    (default: './data/coco.names')
  --image: path to input image
    (default: './data/girl.png')
  --output: path to output image
    (default: './output.jpg')
  --[no]tiny: yolov3 or yolov3-tiny
    (default: 'false')
  --weights: path to weights file
    (default: './checkpoints/yolov3.tf')
  --num_classes: number of classes in the model
    (default: '80')
    (an integer)

detect_video.py:
  --classes: path to classes file
    (default: './data/coco.names')
  --video: path to input video (use 0 for cam)
    (default: './data/video.mp4')
  --output: path to output video (remember to set right codec for given format. e.g. XVID for .avi)
    (default: None)
  --output_format: codec used in VideoWriter when saving video to file
    (default: 'XVID')
  --[no]tiny: yolov3 or yolov3-tiny
    (default: 'false')
  --weights: path to weights file
    (default: './checkpoints/yolov3.tf')
  --num_classes: number of classes in the model
    (default: '80')
    (an integer)

train.py:
  --batch_size: batch size
    (default: '8')
    (an integer)
  --classes: path to classes file
    (default: './data/coco.names')
  --dataset: path to dataset
    (default: '')
  --epochs: number of epochs
    (default: '2')
    (an integer)
  --learning_rate: learning rate
    (default: '0.001')
    (a number)
  --mode: <fit|eager_fit|eager_tf>: fit: model.fit, eager_fit: model.fit(run_eagerly=True), eager_tf: custom GradientTape
    (default: 'fit')
  --num_classes: number of classes in the model
    (default: '80')
    (an integer)
  --size: image size
    (default: '416')
    (an integer)
  --[no]tiny: yolov3 or yolov3-tiny
    (default: 'false')
  --transfer: <none|darknet|no_output|frozen|fine_tune>: none: Training from scratch, darknet: Transfer darknet, no_output: Transfer all but output, frozen: Transfer and freeze all,
    fine_tune: Transfer all and freeze darknet only
    (default: 'none')
  --val_dataset: path to validation dataset
    (default: '')
  --weights: path to weights file
    (default: './checkpoints/yolov3.tf')

Change Log

October 1, 2019

  • Updated to Tensorflow to v2.0.0 Release

References

It is pretty much impossible to implement this from the yolov3 paper alone. I had to reference the official (very hard to understand) and many un-official (many minor errors) repos to piece together the complete picture.

Download Details:

Author: zzh8829
Source Code: https://github.com/zzh8829/yolov3-tf2 
License: MIT license

#machinelearning #deeplearning #neuralnetwork #tensorflow 

Royce Reinger

ML-glossary: Machine Learning Glossary

Machine Learning Glossary

Looking for fellow maintainers!

Apologies for my non-responsiveness. :( I've been heads down at Cruise, building ML infra for self-driving cars, and haven't reviewed this repo in forever. Looks like we're getting 54k monthly active users now, and I think the repo deserves more attention. Let me know if you would be interested in joining as a maintainer with privileges to merge PRs.

How To Contribute

Clone Repo

git clone https://github.com/bfortuner/ml-glossary.git

Install Dependencies

# Assumes you have the usual suspects installed: numpy, scipy, etc..
pip install sphinx sphinx-autobuild
pip install sphinx_rtd_theme
pip install recommonmark

If you have Python 3.x installed, use:

pip3 install sphinx sphinx-autobuild
pip3 install sphinx_rtd_theme
pip3 install recommonmark

Preview Changes

If you are using make, build with:

cd ml-glossary
cd docs
make html

For Windows.

cd ml-glossary
cd docs
build.bat html

Verify your changes by opening the index.html file in _build/

Submit Pull Request

Short for time?

Feel free to raise an issue to correct errors or contribute content without a pull request.

Style Guide

Each entry in the glossary MUST include the following at a minimum:

  1. Concise explanation - as short as possible, but no shorter
  2. Citations - Papers, Tutorials, etc.

Excellent entries will also include:

  1. Visuals - diagrams, charts, animations, images
  2. Code - python/numpy snippets, classes, or functions
  3. Equations - Formatted with Latex

The goal of the glossary is to present content in the most accessible way possible, with a heavy emphasis on visuals and interactive diagrams. That said, in the spirit of rapid prototyping, it's okay to submit a "rough draft" without visuals or code. We expect other readers will enhance your submission over time.
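For reference, a hypothetical entry sketch in RST (the term, formula, and reference below are illustrative, not an existing glossary entry):

Accuracy
========

The percentage of predictions the model got right.

.. math::

    \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}

**References**

- https://en.wikipedia.org/wiki/Accuracy_and_precision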

Why RST and not Markdown?

RST has more features. For large and complex documentation projects, it's the logical choice.

Top Contributors

We're big fans of Distill and we like their idea of offering prizes for high-quality submissions. We don't have as much money as they do, but we'd still like to reward contributors in some way for contributing to the glossary. For instance, a cheatsheet cryptocurrency where tokens equal commits ;). Let us know if you have better ideas. In the end, this is an open-source project and we hope contributing to a repository of concise, accessible, machine learning knowledge is enough incentive on its own!

Tips and Tricks

Resources

View The Glossary

Download Details:

Author: Bfortuner
Source Code: https://github.com/bfortuner/ml-glossary 
License: MIT license

#datascience #machinelearning #deeplearning #neuralnetwork 

Royce Reinger

Imgclsmob: Sandbox for Training Deep Learning Networks

Deep learning networks

This repo is used to research convolutional networks primarily for computer vision tasks. For this purpose, the repo contains (re)implementations of various classification, segmentation, detection, and pose estimation models and scripts for training/evaluating/converting.

The following frameworks are used: Gluon/MXNet, PyTorch, Chainer, Keras, TensorFlow 1.x, and TensorFlow 2.x.

For each supported framework, there is a PIP package containing pure models without auxiliary scripts.

Currently, models are mostly implemented on Gluon and then ported to other frameworks. Some models are pretrained on ImageNet-1K, CIFAR-10/100, SVHN, CUB-200-2011, Pascal VOC2012, ADE20K, Cityscapes, and COCO datasets. All pretrained weights are loaded automatically during use. See examples of such automatic loading of weights in the sections of the documentation dedicated to a particular package.
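For example, with the PyTorch package (pytorchcv), a pretrained model loads its weights automatically on first use; a minimal sketch (the specific model name is just an illustration):

from pytorchcv.model_provider import get_model as ptcv_get_model

# Downloads and caches the ImageNet-1K weights the first time it is called
net = ptcv_get_model("resnet18", pretrained=True)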

Installation

To use training/evaluating scripts as well as all models, you need to clone the repository and install dependencies:

git clone git@github.com:osmr/imgclsmob.git
pip install -r requirements.txt

Table of implemented classification models

Some remarks:

  • Repo is an author repository, if it exists.
  • a, b, c, d, and e means the implementation of a model for ImageNet-1K, CIFAR-10, CIFAR-100, SVHN, and CUB-200-2011, respectively.
  • A, B, C, D, and E means having a pre-trained model for corresponding datasets.
Model                  Gluon   PyTorch  Chainer  Keras  TF    TF2    Paper  Repo  Year
AlexNet                A       A        A        A      A     A      link   link  2012
ZFNet                  A       A        A        A      A     A      link   -     2013
VGG                    A       A        A        A      A     A      link   -     2014
BN-VGG                 A       A        A        A      A     A      link   -     2015
BN-Inception           A       A        A        -      -     A      link   -     2015
ResNet                 ABCDE   ABCDE    ABCDE    A      A     ABCDE  link   link  2015
PreResNet              ABCD    ABCD     ABCD     A      A     ABCD   link   link  2016
ResNeXt                ABCD    ABCD     ABCD     A      A     ABCD   link   link  2016
SENet                  A       A        A        A      A     A      link   link  2017
SE-ResNet              ABCDE   ABCDE    ABCDE    A      A     ABCDE  link   link  2017
SE-PreResNet           ABCD    ABCD     ABCD     A      A     ABCD   link   link  2017
SE-ResNeXt             A       A        A        A      A     A      link   link  2017
ResNeSt(A)             A       A        A        -      -     A      link   link  2020
IBN-ResNet             A       A        -        -      -     A      link   link  2018
IBN-ResNeXt            A       A        -        -      -     A      link   link  2018
IBN-DenseNet           A       A        -        -      -     A      link   link  2018
AirNet                 A       A        A        -      -     A      link   link  2018
AirNeXt                A       A        A        -      -     A      link   link  2018
BAM-ResNet             A       A        A        -      -     A      link   link  2018
CBAM-ResNet            A       A        A        -      -     A      link   link  2018
ResAttNet              a       a        a        -      -     -      link   link  2017
SKNet                  a       a        a        -      -     -      link   link  2019
SCNet                  A       A        A        -      -     A      link   link  2020
RegNet                 A       A        A        -      -     A      link   link  2020
DIA-ResNet             aBCD    aBCD     aBCD     -      -     -      link   link  2019
DIA-PreResNet          aBCD    aBCD     aBCD     -      -     -      link   link  2019
PyramidNet             ABCD    ABCD     ABCD     -      -     ABCD   link   link  2016
DiracNetV2             A       A        A        -      -     A      link   link  2017
ShaResNet              a       a        a        -      -     -      link   link  2017
CRU-Net                A       -        -        -      -     -      link   link  2018
DenseNet               ABCD    ABCD     ABCD     A      A     ABCD   link   link  2016
CondenseNet            A       A        A        -      -     -      link   link  2017
SparseNet              a       a        a        -      -     -      link   link  2018
PeleeNet               A       A        A        -      -     A      link   link  2018
Oct-ResNet             abcd    a        a        -      -     -      link   -     2019
Res2Net                a       -        -        -      -     -      link   -     2019
WRN                    ABCD    ABCD     ABCD     -      -     a      link   link  2016
WRN-1bit               BCD     BCD      BCD      -      -     -      link   link  2018
DRN-C                  A       A        A        -      -     A      link   link  2017
DRN-D                  A       A        A        -      -     A      link   link  2017
DPN                    A       A        A        -      -     A      link   link  2017
DarkNet Ref            A       A        A        A      A     A      link   link  -
DarkNet Tiny           A       A        A        A      A     A      link   link  -
DarkNet-19             a       a        a        a      a     a      link   link  -
DarkNet-53             A       A        A        A      A     A      link   link  2018
ChannelNet             a       a        a        -      a     -      link   link  2018
iSQRT-COV-ResNet       a       a        -        -      -     -      link   link  2017
RevNet                 -       a        -        -      -     -      link   link  2017
i-RevNet               A       A        A        -      -     -      link   link  2018
BagNet                 A       A        A        -      -     A      link   link  2019
DLA                    A       A        A        -      -     A      link   link  2017
MSDNet                 a       ab       -        -      -     -      link   link  2017
FishNet                A       A        A        -      -     -      link   link  2018
ESPNetv2               A       A        A        -      -     -      link   link  2018
DiCENet                A       A        A        -      -     A      link   link  2019
HRNet                  A       A        A        -      -     A      link   link  2019
VoVNet                 A       A        A        -      -     A      link   link  2019
SelecSLS               A       A        A        -      -     A      link   link  2019
HarDNet                A       A        A        -      -     A      link   link  2019
X-DenseNet             aBCD    aBCD     aBCD     -      -     -      link   link  2017
SqueezeNet             A       A        A        A      A     A      link   link  2016
SqueezeResNet          A       A        A        A      A     A      link   -     2016
SqueezeNext            A       A        A        A      A     A      link   link  2018
ShuffleNet             A       A        A        A      A     A      link   -     2017
ShuffleNetV2           A       A        A        A      A     A      link   -     2018
MENet                  A       A        A        A      A     A      link   link  2018
MobileNet              AE      AE       AE       A      A     AE     link   link  2017
FD-MobileNet           A       A        A        A      A     A      link   link  2018
MobileNetV2            A       A        A        A      A     A      link   link  2018
MobileNetV3            A       A        A        A      -     A      link   link  2019
IGCV3                  A       A        A        A      A     A      link   link  2018
GhostNet               a       a        a        -      -     a      link   link  2019
MnasNet                A       A        A        A      A     A      link   -     2018
DARTS                  A       A        A        -      -     -      link   link  2018
ProxylessNAS           AE      AE       AE       -      -     AE     link   link  2018
FBNet-C                A       A        A        -      -     A      link   -     2018
Xception               A       A        A        -      -     A      link   link  2016
InceptionV3            A       A        A        -      -     A      link   link  2015
InceptionV4            A       A        A        -      -     A      link   link  2016
InceptionResNetV1      A       A        A        -      -     A      link   link  2016
InceptionResNetV2      A       A        A        -      -     A      link   link  2016
PolyNet                A       A        A        -      -     A      link   link  2016
NASNet-Large           A       A        A        -      -     A      link   link  2017
NASNet-Mobile          A       A        A        -      -     A      link   link  2017
PNASNet-Large          A       A        A        -      -     A      link   link  2017
SPNASNet               A       A        A        -      -     A      link   link  2019
EfficientNet           A       A        A        A      -     A      link   link  2019
MixNet                 A       A        A        -      -     A      link   link  2019
NIN                    BCD     BCD      BCD      -      -     -      link   link  2013
RoR-3                  BCD     BCD      BCD      -      -     -      link   -     2016
RiR                    BCD     BCD      BCD      -      -     -      link   -     2016
ResDrop-ResNet         bcd     bcd      bcd      -      -     -      link   link  2016
Shake-Shake-ResNet     BCD     BCD      BCD      -      -     -      link   link  2017
ShakeDrop-ResNet       bcd     bcd      bcd      -      -     -      link   -     2018
FractalNet             bc      bc       -        -      -     -      link   link  2016
NTS-Net                E       E        E        -      -     -      link   link  2018

Table of implemented segmentation models

Some remarks:

  • a/A corresponds to Pascal VOC2012.
  • b/B corresponds to ADE20K.
  • c/C corresponds to Cityscapes.
  • d/D corresponds to COCO.
  • e/E corresponds to CelebAMask-HQ.
Model       Gluon  PyTorch  Chainer  Keras  TF   TF2   Paper  Repo  Year
PSPNet      ABCD   ABCD     ABCD     -      -    ABCD  link   -     2016
DeepLabv3   ABcD   ABcD     ABcD     -      -    ABcD  link   -     2017
FCN-8s(d)   ABcD   ABcD     ABcD     -      -    ABcD  link   -     2014
ICNet       C      C        C        -      -    C     link   link  2017
SINet       C      C        C        -      -    c     link   link  2019
BiSeNet     e      e        e        -      -    e     link   -     2018
DANet       C      C        C        -      -    C     link   link  2018
Fast-SCNN   C      C        C        -      -    C     link   -     2019
CGNet       c      c        c        -      -    c     link   link  2018
DABNet      c      c        c        -      -    c     link   link  2019
FPENet      c      c        c        -      -    c     link   -     2019
ContextNet  -      c        -        -      -    -     link   -     2018
LEDNet      c      c        c        -      -    c     link   -     2019
ESNet       -      c        -        -      -    -     link   -     2019
EDANet      -      c        -        -      -    -     link   link  2018
ENet        -      c        -        -      -    -     link   -     2016
ERFNet      -      c        -        -      -    -     link   -     2017
LinkNet     -      c        -        -      -    -     link   -     2017
SegNet      -      c        -        -      -    -     link   -     2015
U-Net       -      c        -        -      -    -     link   -     2015
SQNet       -      c        -        -      -    -     link   -     2016

Table of implemented object detection models

Some remarks:

  • a/A corresponds to COCO.
Model      Gluon  PyTorch  Chainer  Keras  TF   TF2  Paper  Repo  Year
CenterNet  a      a        a        -      -    a    link   link  2019

Table of implemented human pose estimation models

Some remarks:

  • a/A corresponds to COCO.
Model                 Gluon  PyTorch  Chainer  Keras  TF   TF2  Paper  Repo  Year
AlphaPose             A      A        A        -      -    A    link   link  2016
SimplePose            A      A        A        -      -    A    link   link  2018
SimplePose(Mobile)    A      A        A        -      -    A    link   -     2018
Lightweight OpenPose  A      A        A        -      -    A    link   link  2018
IBPPose               A      A        A        -      -    A    link   link  2019

Table of implemented automatic speech recognition models

Some remarks:

  • a/A corresponds to LibriSpeech.
  • b/B corresponds to Mozilla Common Voice.
Model      Gluon  PyTorch  Chainer  Keras  TF   TF2  Paper  Repo  Year
Jasper DR  AB     AB       ab       -      -    ab   link   link  2019
QuartzNet  AB     AB       ab       -      -    ab   link   link  2019

Download Details:

Author: osmr
Source Code: https://github.com/osmr/imgclsmob 
License: MIT license

#machinelearning #deeplearning #neuralnetwork #mxnet #chainer #tensorflow 

Royce Reinger

Composer: Train Neural Networks Up to 7x Faster

Composer

A PyTorch Library for Efficient Neural Network Training

Train Faster, Reduce Cost, Get Better Models
 

👋 Welcome

Composer is a PyTorch library that enables you to train neural networks faster, at lower cost, and to higher accuracy. We've implemented more than two dozen speedup methods that can be applied to your training loop in just a few lines of code, or used with our built-in Trainer. We continually integrate the latest state-of-the-art in efficient neural network training.

Composer features:

  • 20+ methods for speeding up training networks for computer vision and natural language. Don't waste hours trying to reproduce research papers when Composer has done the work for you.
  • An easy-to-use trainer that has been written to be as performant as possible and integrates best practices for efficient, multi-GPU training.
  • Functional forms of all of our speedup methods that allow you to integrate them into your existing training loop.
  • Strong, reproducible baselines to get you started as quickly as possible.

Benefits

    

With no additional tuning, you can apply our methods to:

  • Train ResNet-50 on ImageNet to the standard 76.6% top-one accuracy for $15 in 27 minutes (with vanilla PyTorch: $116 in 3.5 hours) on AWS.
  • Train GPT-2 125M to the standard perplexity of 24.11 for $145 in 4.5 hours (with vanilla PyTorch: $255 in 7.8 hours) on AWS.
  • Train DeepLab-v3 on ADE20k to the standard mean IOU of 45.7 for $36 in 1.1 hours (with vanilla PyTorch: $110 in 3.5 hours) on AWS.

🚀 Quickstart

💾 Installation

Composer is available with Pip:

pip install mosaicml

Alternatively, install Composer with Conda:

conda install -c mosaicml mosaicml

🚌 Usage

You can use Composer's speedup methods in two ways:

  • Through a standalone Functional API (similar to torch.nn.functional) that allows you to integrate them into your existing training code.
  • Using Composer's built-in Trainer, which is designed to be performant and automatically takes care of the details of using speedup methods.

Example: Functional API

Integrate our speedup methods into your training loop with just a few lines of code, and see the results. Here we easily apply BlurPool and SqueezeExcite:

import composer.functional as cf
from torchvision import models

my_model = models.resnet18()

# add blurpool and squeeze excite layers
my_model = cf.apply_blurpool(my_model)
my_model = cf.apply_squeeze_excite(my_model)

# your own training code starts here

For more examples, see the Composer Functional API Colab notebook and Functional API guide.

Example: Trainer

For the best experience and the most efficient possible training, we recommend using Composer's built-in trainer, which automatically takes care of the details of using speedup methods and provides useful abstractions that facilitate rapid experimentation.

    

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from composer import Trainer
from composer.algorithms import ChannelsLast, CutMix, LabelSmoothing
from composer.models import mnist_model

transform = transforms.Compose([transforms.ToTensor()])
train_dataset = datasets.MNIST("data", download=True, train=True, transform=transform)
eval_dataset = datasets.MNIST("data", download=True, train=False, transform=transform)
train_dataloader = DataLoader(train_dataset, batch_size=128)
eval_dataloader = DataLoader(eval_dataset, batch_size=128)

trainer = Trainer(
    model=mnist_model(),
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    max_duration="1ep",
    algorithms=[
        ChannelsLast(),
        CutMix(alpha=1.0),
        LabelSmoothing(smoothing=0.1),
    ]
)
trainer.fit()

Composer's built-in trainer makes it easy to add multiple speedup methods in a single line of code! Trying out new methods or combinations of methods is as easy as changing a single list.
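For example, assuming BlurPool and ProgressiveResizing are importable from composer.algorithms (their exact constructor arguments may differ across Composer versions), switching recipes is just editing that list:

from composer.algorithms import BlurPool, ProgressiveResizing

# Same Trainer setup as in the example above; only the algorithms list changes
trainer = Trainer(
    model=mnist_model(),
    train_dataloader=train_dataloader,
    eval_dataloader=eval_dataloader,
    max_duration="1ep",
    algorithms=[BlurPool(), ProgressiveResizing()],  # a different speedup recipe
)
trainer.fit()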

Here are some examples of methods available in Composer (see here for the full list):

| Name | Attribution | tl;dr | Example Benchmark | Speed Up* |
|---|---|---|---|---|
| Alibi | Press et al, 2021 | Replace attention with AliBi. | GPT-2 | 1.5x |
| BlurPool | Zhang, 2019 | Applies an anti-aliasing filter before every downsampling operation. | ResNet-101 | 1.2x |
| ChannelsLast | PyTorch | Uses channels last memory format (NHWC). | ResNet-101 | 1.5x |
| CutOut | DeVries et al, 2017 | Randomly erases rectangular blocks from the image. | ResNet-101 | 1.2x |
| LabelSmoothing | Szegedy et al, 2015 | Smooths the labels with a uniform prior. | ResNet-101 | 1.5x |
| MixUp | Zhang et al, 2017 | Blends pairs of examples and labels. | ResNet-101 | 1.5x |
| RandAugment | Cubuk et al, 2020 | Applies a series of random augmentations to each image. | ResNet-101 | 1.3x |
| SAM | Foret et al, 2021 | An optimization strategy that seeks flatter minima. | ResNet-101 | 1.4x |
| SeqLengthWarmup | Li et al, 2021 | Progressively increase sequence length. | GPT-2 | 1.2x |
| Stochastic Depth | Huang et al, 2016 | Replaces a specified layer with a stochastic version that randomly drops the layer or samples during training. | ResNet-101 | 1.1x |

* = time-to-train to the same quality as the baseline.

🛠 Building Speedup Recipes

Given two methods that speed up training by 1.5x each, do they combine to provide a 2.25x (1.5x * 1.5x) speedup? Not necessarily. They may optimize the same part of the training process and lead to diminishing returns, or they may even interact in ways that prove detrimental. Determining which methods to compose together isn't as simple as assembling a set of methods that perform best individually.

We have come up with compositions of methods that work especially well together through rigorous exploration of the design space of recipes and research on the science behind composition. The MosaicML Explorer contains all of the data we have collected so far on composition, and it highlights the compositions of methods that are Pareto-optimal, i.e. those that provide the best possible trade-offs between training time or cost and the quality of the trained model. Whether you want to reach the same quality faster or get better quality within your current budget, Explorer can help you decide which speedup methods to use. We update this data regularly as we add new methods and develop better recipes.

As an example, here are two performant recipes, one for ResNet-101 on ImageNet, and the other for GPT-2 on OpenWebText, on 8xA100s:

ResNet-101

| Name | Functional | tl;dr | Benchmark | Speed Up |
|---|---|---|---|---|
| Blur Pool | cf.apply_blurpool | Applies an anti-aliasing filter before every downsampling operation. | ResNet-101 | 1.2x |
| Channels Last | cf.apply_channels_last | Uses channels last memory format (NHWC). | ResNet-101 | 1.5x |
| Label Smoothing | cf.smooth_labels | Smooths the labels with a uniform prior. | ResNet-101 | 1.5x |
| MixUp | cf.mixup_batch | Blends pairs of examples and labels. | ResNet-101 | 1.5x |
| Progressive Resizing | cf.resize_batch | Increases the input image size during training. | ResNet-101 | 1.3x |
| SAM | N/A | SAM optimizer measures sharpness of optimization space. | ResNet-101 | 1.5x |
| Composition | N/A | Cheapest: $49 @ 78.1% Acc | ResNet-101 | 3.5x |

GPT-2

| Name | Functional | tl;dr | Benchmark | Speed Up |
|---|---|---|---|---|
| Alibi | cf.apply_alibi | Replace attention with AliBi. | GPT-2 | 1.6x |
| Seq Length Warmup | cf.set_batch_sequence_length | Progressively increase sequence length. | GPT-2 | 1.5x |
| Composition | N/A | Cheapest: $145 @ 24.11 PPL | GPT-2 | 1.7x |

⚙️ What benchmarks does Composer support?

We'll use the word benchmark to denote a specific model trained on a specific dataset, with model quality assessed using a specific metric.

Composer features computer vision and natural language processing benchmarks including (but not limited to):

| Model | Dataset | Loss | Task | Evaluation Metrics |
|---|---|---|---|---|
| Computer Vision | | | | |
| ResNet Family | CIFAR-10 | Cross Entropy | Image Classification | Classification Accuracy |
| ResNet Family | ImageNet | Cross Entropy | Image Classification | Classification Accuracy |
| EfficientNet Family | ImageNet | Cross Entropy | Image Classification | Classification Accuracy |
| UNet | BraTS | Dice Loss | Image Segmentation | Dice Coefficient |
| DeepLab v3 | ADE20K | Cross Entropy | Image Segmentation | mIoU |
| Natural Language Processing | | | | |
| BERT Family | {Wikipedia & BooksCorpus, C4} | Cross Entropy | Masked Language Modeling | GLUE |
| GPT Family | {OpenWebText, C4} | Cross Entropy | Language Modeling | Perplexity |

🤔 Why should I use Composer?

Speed

The compute required to train a state-of-the-art machine learning model is doubling every 6 months, putting such models further and further out of reach for most researchers and practitioners with each passing day.

Composer addresses this challenge by focusing on training efficiency: it contains cutting-edge speedup methods that modify the training algorithm to reduce the time and cost necessary to train deep learning models. When you use Composer, you can rest assured that you are training efficiently. We have combed the literature, done the science, and built industrial-grade implementations to ensure this is the case.

Flexibility

Even after these speedup methods are implemented, assembling them together into recipes is nontrivial. We designed Composer with the right abstractions for composing (and creating new) speedup methods.

Specifically, Composer uses two-way callbacks (Howard et al, 2020) to modify the entire training state at particular events in the training loop to effect speedups. We handle collisions between methods, proper method ordering, and more.

Through this, methods can modify:

  • data inputs for batches (data augmentations, sequence length warmup, skipping examples, etc.)
  • neural network architecture (pruning, model surgery, etc.)
  • loss function (label smoothing, MixUp, CutMix, etc.)
  • optimizer (Sharpness Aware Minimization)
  • training dynamics (layer freezing, selective backprop, etc.)

You can easily add your own methods or callbacks to try out your ideas or modify any part of the training loop.
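As a rough sketch of what such a method can look like (the exact base-class API may differ between Composer versions, and ScaleLoss is a hypothetical toy example, not a shipped algorithm):

from composer.core import Algorithm, Event

class ScaleLoss(Algorithm):
    """Hypothetical toy method: scale the loss at every AFTER_LOSS event."""

    def __init__(self, factor: float = 0.5):
        self.factor = factor

    def match(self, event, state):
        # Only act at the AFTER_LOSS point of the training loop
        return event == Event.AFTER_LOSS

    def apply(self, event, state, logger):
        # Two-way callbacks may read *and* modify the training state
        state.loss = state.loss * self.factor

# Used like any built-in method: Trainer(..., algorithms=[ScaleLoss(0.5)])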

Support

Composer is an active and ongoing project. We will respond quickly to issues filed in this repository.

🧐 Why shouldn’t I use Composer?

  • Composer is mostly optimized for computer vision and natural language processing. If you work on, e.g., reinforcement learning, you might encounter rough edges when using Composer.
  • Composer currently only supports NVIDIA GPUs, although we're working on adding alternatives.
  • Since Composer is still in alpha, our API may not be stable. We recommend pegging your work to a Composer version.

📚 Learn More

Here are some resources actively maintained by the Composer community to help you get started:

| Resource | Details |
|---|---|
| Getting started with our Trainer | A Colab Notebook showing how to use our Trainer |
| Getting started with our Functional API | A Colab Notebook showing how to use our Functional API |
| Building Speedup Methods | A Colab Notebook showing how to build new training modifications on top of Composer |
| Training BERTs with Composer and 🤗 | A Colab Notebook showing how to train BERT models with Composer and 🤗! |

If you have any questions, please feel free to reach out to us on Twitter, email, or our Community Slack!

💫 Contributors

Composer is part of the broader Machine Learning community, and we welcome any contributions, pull requests, or issues!

To start contributing, see our Contributing page.

P.S.: We're hiring!

✍️ Citation

@misc{mosaicml2022composer,
    author = {The Mosaic ML Team},
    title = {composer},
    year = {2021},
    howpublished = {\url{https://github.com/mosaicml/composer/}},
}

Download Details:

Author: Mosaicml
Source Code: https://github.com/mosaicml/composer 
License: Apache-2.0 license

#machinelearning #deeplearning #neuralnetwork #pytorch 

Composer: Train Neural Networks Up to 7x Faster
Royce Reinger

1673771280

Igel: A Delightful ML tool That Allows You To Train, Test

igel

A delightful machine learning tool that allows you to train/fit, test and use models without writing code

Introduction

The goal of the project is to provide machine learning for everyone, both technical and non-technical users.

I sometimes needed a tool that I could use to quickly create a machine learning prototype, whether to build a proof of concept, create a fast draft model to prove a point, or use auto ML. I often found myself stuck writing boilerplate code and thinking too much about where to start. Therefore, I decided to create this tool.

igel is built on top of other ML frameworks. It provides a simple way to use machine learning without writing a single line of code. Igel is highly customizable, but only if you want to. Igel does not force you to customize anything. Besides default values, igel can use auto-ml features to figure out a model that can work great with your data.

All you need is a yaml (or json) file, where you need to describe what you are trying to do. That's it!

Igel supports regression, classification and clustering. It also supports auto-ML features like ImageClassification and TextClassification.

Igel supports the dataset types most used in the data science field. For instance, your input dataset can be a csv, txt, excel sheet, json or even an html file that you want to fetch. If you are using auto-ML features, then you can even feed raw data to igel and it will figure out how to deal with it. More on this later in the examples.

Features

  • Supports most dataset types (csv, txt, excel, json, html) even just raw data stored in folders
  • Supports all state of the art machine learning models (even preview models)
  • Supports different data preprocessing methods
  • Provides flexibility and data control while writing configurations
  • Supports cross validation
  • Supports hyperparameter search (version >= 0.2.8)
  • Supports yaml and json format
  • Usage from GUI
  • Supports different sklearn metrics for regression, classification and clustering
  • Supports multi-output/multi-target regression and classification
  • Supports multi-processing for parallel model construction
  • Support for auto machine learning

Installation

  • The easiest way is to install igel using pip
$ pip install -U igel

Models

Igel's supported models:

+--------------------+----------------------------+-------------------------+
|      regression    |        classification      |        clustering       |
+--------------------+----------------------------+-------------------------+
|   LinearRegression |         LogisticRegression |                  KMeans |
|              Lasso |                      Ridge |     AffinityPropagation |
|          LassoLars |               DecisionTree |                   Birch |
| BayesianRegression |                  ExtraTree | AgglomerativeClustering |
|    HuberRegression |               RandomForest |    FeatureAgglomeration |
|              Ridge |                 ExtraTrees |                  DBSCAN |
|  PoissonRegression |                        SVM |         MiniBatchKMeans |
|      ARDRegression |                  LinearSVM |    SpectralBiclustering |
|  TweedieRegression |                      NuSVM |    SpectralCoclustering |
| TheilSenRegression |            NearestNeighbor |      SpectralClustering |
|    GammaRegression |              NeuralNetwork |               MeanShift |
|   RANSACRegression | PassiveAgressiveClassifier |                  OPTICS |
|       DecisionTree |                 Perceptron |                KMedoids |
|          ExtraTree |               BernoulliRBM |                    ---- |
|       RandomForest |           BoltzmannMachine |                    ---- |
|         ExtraTrees |       CalibratedClassifier |                    ---- |
|                SVM |                   Adaboost |                    ---- |
|          LinearSVM |                    Bagging |                    ---- |
|              NuSVM |           GradientBoosting |                    ---- |
|    NearestNeighbor |        BernoulliNaiveBayes |                    ---- |
|      NeuralNetwork |      CategoricalNaiveBayes |                    ---- |
|         ElasticNet |       ComplementNaiveBayes |                    ---- |
|       BernoulliRBM |         GaussianNaiveBayes |                    ---- |
|   BoltzmannMachine |      MultinomialNaiveBayes |                    ---- |
|           Adaboost |                       ---- |                    ---- |
|            Bagging |                       ---- |                    ---- |
|   GradientBoosting |                       ---- |                    ---- |
+--------------------+----------------------------+-------------------------+

For auto ML:

  • ImageClassifier
  • TextClassifier
  • ImageRegressor
  • TextRegressor
  • StructeredDataClassifier
  • StructeredDataRegressor
  • AutoModel

Quick Start

The help command is very useful to check supported commands and corresponding args/options

$ igel --help

You can also run help on sub-commands, for example:

$ igel fit --help

Igel is highly customizable. If you know what you want and want to configure your model manually, then check the next sections, which will guide you on how to write a yaml or a json config file. After that, you just have to tell igel, what to do and where to find your data and config file. Here is an example:

$ igel fit --data_path 'path_to_your_csv_dataset.csv' --yaml_path 'path_to_your_yaml_file.yaml'

However, you can also use the auto-ml features and let igel do everything for you. A great example for this would be image classification. Let's imagine you already have a dataset of raw images stored in a folder called images

All you have to do is run:

$ igel auto-train --data_path 'path_to_your_images_folder' --task ImageClassification

That's it! Igel will read the images from the directory, process the dataset (converting to matrices, rescaling, splitting, etc.) and start training/optimizing a model that works well on your data. As you can see, it's pretty easy: you just have to provide the path to your data and the task you want to perform.

Note

This feature is computationally expensive as igel would try many different models and compare their performance in order to find the 'best' one.

Usage

You can run the help command to get instructions. You can also run help on sub-commands!

$ igel --help

Configuration Step

First step is to provide a yaml file (you can also use json if you want)

You can do this manually by creating a .yaml file (called igel.yaml by convention, but you can name it whatever you want) and editing it yourself. However, if you are lazy (and you probably are, like me :D), you can use the igel init command to get started fast, which will create a basic config file for you on the fly.

"""
igel init --help


Example:
If I want to use neural networks to classify whether someone is sick or not using the indian-diabetes dataset,
then I would use this command to initialize a yaml file n.b. you may need to rename outcome column in .csv to sick:

$ igel init -type "classification" -model "NeuralNetwork" -target "sick"
"""
$ igel init

After running the command, an igel.yaml file will be created for you in the current working directory. You can check it out and modify it if you want to, otherwise you can also create everything from scratch.

  • Demo:

../assets/igel-init.gif


# model definition
model:
    # in the type field, you can write the type of problem you want to solve. Whether regression, classification or clustering
    # Then, provide the algorithm you want to use on the data. Here I'm using the random forest algorithm
    type: classification
    algorithm: RandomForest     # make sure you write the name of the algorithm in pascal case
    arguments:
        n_estimators: 100   # here, I set the number of estimators (or trees) to 100
        max_depth: 30       # set the max_depth of the tree

# target you want to predict
# Here, as an example, I'm using the famous indians-diabetes dataset, where I want to predict whether someone has diabetes or not.
# Depending on your data, you need to provide the target(s) you want to predict here
target:
    - sick

In the example above, I'm using random forest to classify whether someone has diabetes or not, depending on some features in the dataset. I used the famous indian-diabetes dataset in this example.

Notice that I passed n_estimators and max_depth as additional arguments to the model. If you don't provide arguments, then the defaults will be used. You don't have to memorize the arguments for each model. You can always run igel models in your terminal, which will put you in an interactive mode, where you will be prompted to enter the model you want to use and the type of problem you want to solve. Igel will then show you information about the model and a link that you can follow to see a list of available arguments and how to use them.
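For example, to enter that interactive mode:

$ igel models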

Training

  • The expected way to use igel is from terminal (igel CLI):

Run this command in terminal to fit/train a model, where you provide the path to your dataset and the path to the yaml file

$ igel fit --data_path 'path_to_your_csv_dataset.csv' --yaml_path 'path_to_your_yaml_file.yaml'

# or shorter

$ igel fit -dp 'path_to_your_csv_dataset.csv' -yml 'path_to_your_yaml_file.yaml'

"""
That's it. Your "trained" model can be now found in the model_results folder
(automatically created for you in your current working directory).
Furthermore, a description can be found in the description.json file inside the model_results folder.
"""
  • Demo:

../assets/igel-fit.gif


Evaluation

You can then evaluate the trained/pre-fitted model:

$ igel evaluate -dp 'path_to_your_evaluation_dataset.csv'
"""
This will automatically generate an evaluation.json file in the current directory, where all evaluation results are stored
"""
  • Demo:

../assets/igel-eval.gif


Prediction

Finally, you can use the trained/pre-fitted model to make predictions if you are happy with the evaluation results:

$ igel predict -dp 'path_to_your_test_dataset.csv'
"""
This will generate a predictions.csv file in your current directory, where all predictions are stored in a csv file
"""
  • Demo:

../assets/igel-pred.gif

../assets/igel-predict.gif


Experiment

You can combine the train, evaluate and predict phases using one single command called experiment:

$ igel experiment -DP "path_to_train_data path_to_eval_data path_to_test_data" -yml "path_to_yaml_file"

"""
This will run fit using train_data, evaluate using eval_data and further generate predictions using the test_data
"""
  • Demo:

../assets/igel-experiment.gif


Export

You can export the trained/pre-fitted sklearn model into ONNX:

$ igel export -dp "path_to_pre-fitted_sklearn_model"

"""
This will convert the sklearn model into ONNX
"""

Use igel from python (instead of terminal)

  • Alternatively, you can also write code if you want to:
from igel import Igel

Igel(cmd="fit", data_path="path_to_your_dataset", yaml_path="path_to_your_yaml_file")
"""
check the examples folder for more
"""

Serve the model

The next step is to use your model in production. Igel helps you with this task too by providing the serve command. Running the serve command will tell igel to serve your model. Precisely, igel will automatically build a REST server and serve your model on a specific host and port, which you can configure by passing these as cli options.

The easiest way is to run:

$ igel serve --model_results_dir "path_to_model_results_directory"

Notice that igel needs the --model_results_dir (or -res_dir for short) cli option in order to load the model and start the server. By default, igel will serve your model on localhost:8000; however, you can easily override this by providing host and port cli options.

$ igel serve --model_results_dir "path_to_model_results_directory" --host "127.0.0.1" --port 8000

Igel uses FastAPI for creating the REST server, which is a modern, high-performance framework, and uvicorn to run it under the hood.
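Conceptually, the endpoint igel exposes behaves like the following sketch (an illustration of the /predict contract shown below, not igel's actual implementation; a dummy prediction stands in for the loaded model):

from fastapi import FastAPI
import uvicorn

app = FastAPI()

@app.post("/predict")
def predict(payload: dict):
    # igel loads the pre-fitted model from model_results and builds a feature
    # matrix from the JSON payload (column name -> value or list of values).
    # Here, a dummy prediction stands in for the real model's output.
    n_rows = max(len(v) if isinstance(v, list) else 1 for v in payload.values())
    return {"prediction": [[0.0]] * n_rows}

if __name__ == "__main__":
    uvicorn.run(app, host="localhost", port=8000)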


Using the API with the served model

This example was done using a pre-trained model (created by running igel init --target sick -type classification) and the Indian Diabetes dataset under examples/data. The headers of the columns in the original CSV are ‘preg’, ‘plas’, ‘pres’, ‘skin’, ‘test’, ‘mass’, ‘pedi’ and ‘age’.

CURL:

  • Post with single entry for each predictor
$ curl -X POST localhost:8080/predict --header "Content-Type:application/json" -d '{"preg": 1, "plas": 180, "pres": 50, "skin": 12, "test": 1, "mass": 456, "pedi": 0.442, "age": 50}'

Outputs: {"prediction":[[0.0]]}
  • Post with multiple options for each predictor
$ curl -X POST localhost:8080/predict --header "Content-Type:application/json" -d '{"preg": [1, 6, 10], "plas":[192, 52, 180], "pres": [40, 30, 50], "skin": [25, 35, 12], "test": [0, 1, 1], "mass": [456, 123, 155], "pedi": [0.442, 0.22, 0.19], "age": [50, 40, 29]}'

Outputs: {"prediction":[[1.0],[0.0],[0.0]]}

Caveats/Limitations:

  • each predictor used to train the model must make an appearance in your data (i.e. don’t leave any columns out)
  • each list must have the same number of elements or you’ll get an Internal Server Error
  • as an extension of this, you cannot mix single elements and lists (i.e. {“plas”: 0, “pres”: [1, 2]} isn't allowed)
  • the predict function takes a data path arg and reads in the data for you, but when serving and calling your served model, you'll have to parse the data into JSON yourself. However, the Python client provided in examples/python_client.py will do that for you

Example usage of the Python Client:

from python_client import IgelClient

# the client allows additional args with defaults:
# scheme="http", endpoint="predict", missing_values="mean"
client = IgelClient(host='localhost', port=8080)

# you can post other types of files compatible with what Igel data reading allows
client.post("my_batch_file_for_predicting.csv")

Outputs: <Response 200>: {"prediction":[[1.0],[0.0],[0.0]]}

Overview

The main goal of igel is to provide you with a way to train/fit, evaluate and use models without writing code. Instead, all you need is to provide/describe what you want to do in a simple yaml file.

Basically, you provide descriptions, or rather configurations, in the yaml file as key-value pairs. Here is an overview of all supported configurations (for now):

# dataset operations
dataset:
    type: csv  # [str] -> type of your dataset
    read_data_options: # options you want to supply for reading your data (See the detailed overview about this in the next section)
        sep:  # [str] -> Delimiter to use.
        delimiter:  # [str] -> Alias for sep.
        header:     # [int, list of int] -> Row number(s) to use as the column names, and the start of the data.
        names:  # [list] -> List of column names to use
        index_col: # [int, str, list of int, list of str, False] -> Column(s) to use as the row labels of the DataFrame,
        usecols:    # [list, callable] -> Return a subset of the columns
        squeeze:    # [bool] -> If the parsed data only contains one column then return a Series.
        prefix:     # [str] -> Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
        mangle_dupe_cols:   # [bool] -> Duplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than ‘X’…’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
        dtype:  # [Type name, dict mapping column name to type] -> Data type for data or columns
        engine:     # [str] -> Parser engine to use. The C engine is faster while the python engine is currently more feature-complete.
        converters: # [dict] -> Dict of functions for converting values in certain columns. Keys can either be integers or column labels.
        true_values: # [list] -> Values to consider as True.
        false_values: # [list] -> Values to consider as False.
        skipinitialspace: # [bool] -> Skip spaces after delimiter.
        skiprows: # [list-like] -> Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.
        skipfooter: # [int] -> Number of lines at bottom of file to skip
        nrows: # [int] -> Number of rows of file to read. Useful for reading pieces of large files.
        na_values: # [scalar, str, list, dict] ->  Additional strings to recognize as NA/NaN.
        keep_default_na: # [bool] ->  Whether or not to include the default NaN values when parsing the data.
        na_filter: # [bool] -> Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file.
        verbose: # [bool] -> Indicate number of NA values placed in non-numeric columns.
        skip_blank_lines: # [bool] -> If True, skip over blank lines rather than interpreting as NaN values.
        parse_dates: # [bool, list of int, list of str, list of lists, dict] ->  try parsing the dates
        infer_datetime_format: # [bool] -> If True and parse_dates is enabled, pandas will attempt to infer the format of the datetime strings in the columns, and if it can be inferred, switch to a faster method of parsing them.
        keep_date_col: # [bool] -> If True and parse_dates specifies combining multiple columns then keep the original columns.
        dayfirst: # [bool] -> DD/MM format dates, international and European format.
        cache_dates: # [bool] -> If True, use a cache of unique, converted dates to apply the datetime conversion.
        thousands: # [str] -> the thousands operator
        decimal: # [str] -> Character to recognize as decimal point (e.g. use ‘,’ for European data).
        lineterminator: # [str] -> Character to break file into lines.
        escapechar: # [str] ->  One-character string used to escape other characters.
        comment: # [str] -> Indicates remainder of line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. This parameter must be a single character.
        encoding: # [str] -> Encoding to use for UTF when reading/writing (ex. ‘utf-8’).
        dialect: # [str, csv.Dialect] -> If provided, this parameter will override values (default or not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting
        delim_whitespace: # [bool] -> Specifies whether or not whitespace (e.g. ' ' or '    ') will be used as the sep
        low_memory: # [bool] -> Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference.
        memory_map: # [bool] -> If a filepath is provided for filepath_or_buffer, map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead.

    random_numbers: # random numbers options in case you wanted to generate the same random numbers on each run
        generate_reproducible:  # [bool] -> set this to true to generate reproducible results
        seed:   # [int] -> the seed number is optional. A seed will be set up for you if you didn't provide any

    split:  # split options
        test_size: 0.2  #[float] -> 0.2 means 20% for the test data, so 80% are automatically for training
        shuffle: true   # [bool] -> whether to shuffle the data before/while splitting
        stratify: None  # [list, None] -> If not None, data is split in a stratified fashion, using this as the class labels.

    preprocess: # preprocessing options
        missing_values: mean    # [str] -> other possible values: [drop, median, most_frequent, constant] check the docs for more
        encoding:
            type: oneHotEncoding  # [str] -> other possible values: [labelEncoding]
        scale:  # scaling options
            method: standard    # [str] -> standardization will scale values to have a 0 mean and 1 standard deviation  | you can also try minmax
            target: inputs  # [str] -> scale inputs. | other possible values: [outputs, all] # if you choose all then all values in the dataset will be scaled


# model definition
model:
    type: classification    # [str] -> type of the problem you want to solve. | possible values: [regression, classification, clustering]
    algorithm: NeuralNetwork    # [str (notice the pascal case)] -> which algorithm you want to use. | type igel algorithms in the Terminal to know more
    arguments:          # model arguments: you can check the available arguments for each model by running igel help in your terminal
    use_cv_estimator: false     # [bool] -> if this is true, the CV class of the specific model will be used if it is supported
    cross_validate:
        cv: # [int] -> number of kfold (default 5)
        n_jobs:   # [signed int] -> The number of CPUs to use to do the computation (default None)
        verbose: # [int] -> The verbosity level. (default 0)
    hyperparameter_search:
        method: grid_search   # method you want to use: grid_search and random_search are supported
        parameter_grid:     # put your parameters grid here that you want to use, an example is provided below
            param1: [val1, val2]
            param2: [val1, val2]
        arguments:  # additional arguments you want to provide for the hyperparameter search
            cv: 5   # number of folds
            refit: true   # whether to refit the model after the search
            return_train_score: false   # whether to return the train score
            verbose: 0      # verbosity level

# target you want to predict
target:  # list of strings: basically put here the column(s), you want to predict that exist in your csv dataset
    - put the target you want to predict here
    - you can assign many target if you are making a multioutput prediction

Read Data Options

Note

igel uses pandas under the hood to read & parse the data. Hence, you can also find these optional data-reading parameters in the official pandas documentation.

A detailed overview of the configurations you can provide in the yaml (or json) file is given below. Notice that you will certainly not need all the configuration values for the dataset. They are optional. Generally, igel will figure out how to read your dataset.

However, you can help it by providing extra fields using this read_data_options section. For example, one of the helpful values in my opinion is the "sep", which defines how your columns in the csv dataset are separated. Generally, csv datasets are separated by commas, which is also the default value here. However, it may be separated by a semicolon in your case.

Hence, you can provide this in the read_data_options. Just add the sep: ";" under read_data_options.
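For instance, the relevant part of the configuration would then look like this:

# dataset operations
dataset:
    read_data_options:
        sep: ";"    # columns in the csv file are separated by semicolons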

Supported Read Data Options

| Parameter | Type | Explanation |
|---|---|---|
| sep | str, default ‘,’ | Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator by Python’s builtin sniffer tool, csv.Sniffer. In addition, separators longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\r\t'. |
| delimiter | str, default None | Alias for sep. |
| header | int, list of int, default ‘infer’ | Row number(s) to use as the column names, and the start of the data. Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0 and column names are inferred from the first line of the file, if column names are passed explicitly then the behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names. The header can be a list of integers that specify row locations for a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file. |
| names | array-like, optional | List of column names to use. If the file contains a header row, then you should explicitly pass header=0 to override the column names. Duplicates in this list are not allowed. |
| index_col | int, str, sequence of int / str, or False, default None | Column(s) to use as the row labels of the DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used. Note: index_col=False can be used to force pandas to not use the first column as the index, e.g. when you have a malformed file with delimiters at the end of each line. |
| usecols | list-like or callable, optional | Return a subset of the columns. If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in names or inferred from the document header row(s). For example, a valid list-like usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a DataFrame from data with element order preserved use pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order. If callable, the callable function will be evaluated against the column names, returning names where the callable function evaluates to True. An example of a valid callable argument would be lambda x: x.upper() in ['AAA', 'BBB', 'DDD']. Using this parameter results in much faster parsing time and lower memory usage. |
| squeeze | bool, default False | If the parsed data only contains one column then return a Series. |
| prefix | str, optional | Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, … |
| mangle_dupe_cols | bool, default True | Duplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than ‘X’…’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns. |
| dtype | Type name or dict mapping column name to type, optional | Data type for data or columns. |
| engine | {‘c’, ‘python’}, optional | Parser engine to use. The C engine is faster while the python engine is currently more feature-complete. |
| converters | dict, optional | Dict of functions for converting values in certain columns. Keys can either be integers or column labels. |
| true_values | list, optional | Values to consider as True. |
| false_values | list, optional | Values to consider as False. |
| skipinitialspace | bool, default False | Skip spaces after delimiter. |
| skiprows | list-like, int or callable, optional | Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file. If callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise. An example of a valid callable argument would be lambda x: x in [0, 2]. |
| skipfooter | int, default 0 | Number of lines at bottom of file to skip (Unsupported with engine=’c’). |
| nrows | int, optional | Number of rows of file to read. Useful for reading pieces of large files. |
| na_values | scalar, str, list-like, or dict, optional | Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. By default the following values are interpreted as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’. |
| keep_default_na | bool, default True | Whether or not to include the default NaN values when parsing the data. Depending on whether na_values is passed in, the behavior is as follows: If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing. If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing. If keep_default_na is False, and na_values are specified, only the NaN values specified in na_values are used for parsing. If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN. Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored. |
| na_filter | bool, default True | Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file. |
| verbose | bool, default False | Indicate number of NA values placed in non-numeric columns. |
| skip_blank_lines | bool, default True | If True, skip over blank lines rather than interpreting as NaN values. |
| parse_dates | bool or list of int or names or list of lists or dict, default False | The behavior is as follows: boolean. If True -> try parsing the index. list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column. list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column. dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’. If a column or index cannot be represented as an array of datetimes, say because of an unparseable value or a mixture of timezones, the column or index will be returned unaltered as an object data type. |
| infer_datetime_format | bool, default False | If True and parse_dates is enabled, pandas will attempt to infer the format of the datetime strings in the columns, and if it can be inferred, switch to a faster method of parsing them. In some cases this can increase the parsing speed by 5-10x. |
| keep_date_col | bool, default False | If True and parse_dates specifies combining multiple columns then keep the original columns. |
| date_parser | function, optional | Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. Pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more strings (corresponding to the columns defined by parse_dates) as arguments. |
| dayfirst | bool, default False | DD/MM format dates, international and European format. |
| cache_dates | bool, default True | If True, use a cache of unique, converted dates to apply the datetime conversion. May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets. |
| thousands | str, optional | Thousands separator. |
| decimal | str, default ‘.’ | Character to recognize as decimal point (e.g. use ‘,’ for European data). |
| lineterminator | str (length 1), optional | Character to break file into lines. Only valid with C parser. |
| escapechar | str (length 1), optional | One-character string used to escape other characters. |
| comment | str, optional | Indicates remainder of line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. |
| encoding | str, optional | Encoding to use for UTF when reading/writing (ex. ‘utf-8’). |
| dialect | str or csv.Dialect, optional | If provided, this parameter will override values (default or not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting. |
| low_memory | bool, default True | Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless. |
| memory_map | bool, default False | Map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead. |

E2E Example

A complete end to end solution is provided in this section to prove the capabilities of igel. As explained previously, you need to create a yaml configuration file. Here is an end to end example for predicting whether someone has diabetes or not using the decision tree algorithm. The dataset can be found in the examples folder.

  • Fit/Train a model:
model:
    type: classification
    algorithm: DecisionTree

target:
    - sick
$ igel fit -dp path_to_the_dataset -yml path_to_the_yaml_file

That's it, igel will now fit the model for you and save it in a model_results folder in your current directory.

  • Evaluate the model:

Evaluate the pre-fitted model. Igel will load the pre-fitted model from the model_results directory and evaluate it for you. You just need to run the evaluate command and provide the path to your evaluation data.

$ igel evaluate -dp path_to_the_evaluation_dataset

That's it! Igel will evaluate the model and store statistics/results in an evaluation.json file inside the model_results folder

  • Predict:

Use the pre-fitted model to predict on new data. This is done automatically by igel; you just need to provide the path to the data you want to run predictions on.

$ igel predict -dp path_to_the_new_dataset

That's it! Igel will use the pre-fitted model to make predictions and save them in a predictions.csv file inside the model_results folder.

Advanced Usage

You can also carry out some preprocessing methods or other operations by providing them in the yaml file. Here is an example, where the data is split into 80% for training and 20% for validation/testing. Also, the data is shuffled while splitting.

Furthermore, the data is preprocessed by replacing missing values with the mean (you can also use median, mode, etc.). Check this link for more information.

# dataset operations
dataset:
    split:
        test_size: 0.2
        shuffle: True
        stratify: default

    preprocess: # preprocessing options
        missing_values: mean    # other possible values: [drop, median, most_frequent, constant] check the docs for more
        encoding:
            type: oneHotEncoding  # other possible values: [labelEncoding]
        scale:  # scaling options
            method: standard    # standardization will scale values to have a 0 mean and 1 standard deviation  | you can also try minmax
            target: inputs  # scale inputs. | other possible values: [outputs, all] # if you choose all then all values in the dataset will be scaled

# model definition
model:
    type: classification
    algorithm: RandomForest
    arguments:
        # notice that this is the available args for the random forest model. check different available args for all supported models by running igel help
        n_estimators: 100
        max_depth: 20

# target you want to predict
target:
    - sick

Then, you can fit the model by running the igel command as shown in the other examples

$ igel fit -dp path_to_the_dataset -yml path_to_the_yaml_file

For evaluation

$ igel evaluate -dp path_to_the_evaluation_dataset

For production

$ igel predict -dp path_to_the_new_dataset

Examples

In the examples folder in the repository, you will find a data folder, where the famous indian-diabetes, iris and linnerud (from sklearn) datasets are stored. Furthermore, there are end to end examples inside each folder, where there are scripts and yaml files that will help you get started.

The indian-diabetes-example folder contains two examples to help you get started:

  • The first example is using a neural network, where the configurations are stored in the neural-network.yaml file
  • The second example is using a random forest, where the configurations are stored in the random-forest.yaml file

The iris-example folder contains a logistic regression example, where some preprocessing (one hot encoding) is conducted on the target column to show you more of the capabilities of igel.

Furthermore, the multioutput-example contains a multioutput regression example. Finally, the cv-example contains an example using the Ridge classifier using cross validation.

You can also find a cross validation and a hyperparameter search examples in the folder.

I suggest you play around with the examples and igel cli. However, you can also directly execute the fit.py, evaluate.py and predict.py if you want to.

Auto ML Examples

ImageClassification

First, create or modify a dataset of images that are categorized into sub-folders based on the image label/class. For example, if you have images of dogs and cats, then you will need 2 sub-folders:

  • folder 0, which contains cats images (here the label 0 indicates a cat)
  • folder 1, which contains dogs images (here the label 1 indicates a dog)

Assuming these two sub-folders are contained in one parent folder called images, just feed the data to igel:

$ igel auto-train -dp ./images --task ImageClassification

Igel will handle everything from pre-processing the data to optimizing hyperparameters. At the end, the best model will be stored in the current working dir.

TextClassification

First, create or modify a text dataset that is categorized into sub-folders based on the text label/class. For example, if you have a text dataset of positive and negative feedback, then you will need 2 sub-folders:

  • folder 0, which contains negative feedbacks (here the label 0 indicates a negative one)
  • folder 1, which contains positive feedbacks (here the label 1 indicates a positive one)

Assuming these two sub-folders are contained in one parent folder called texts, just feed the data to igel:

$ igel auto-train -dp ./texts --task TextClassification

Igel will handle everything from pre-processing the data to optimizing hyperparameters. At the end, the best model will be stored in the current working dir.

GUI

You can also run the igel UI if you are not familiar with the terminal. Just install igel on your machine as mentioned above. Then run this single command in your terminal

$ igel gui

This will open up the GUI, which is very simple to use. Check examples of what the GUI looks like and how to use it here: https://github.com/nidhaloff/igel-ui

Running with Docker

  • Use the official image (recommended):

You can pull the image first from docker hub

$ docker pull nidhaloff/igel

Then use it:

$ docker run -it --rm -v $(pwd):/data nidhaloff/igel fit -yml 'your_file.yaml' -dp 'your_dataset.csv'
  • Alternatively, you can create your own image locally if you want:

You can run igel inside of docker by first building the image:

$ docker build -t igel .

And then running it and attaching your current directory (does not need to be the igel directory) as /data (the workdir) inside of the container:

$ docker run -it --rm -v $(pwd):/data igel fit -yml 'your_file.yaml' -dp 'your_dataset.csv'

Links

Help/GetHelp

If you are facing any problems, please feel free to open an issue. Additionally, you can make contact with the author for further information/questions.

Do you like igel? You can always help the development of this project by:

  • Following on github and/or twitter
  • Star the github repo
  • Watch the github repo for new releases
  • Tweet about the package
  • Help others with issues on github
  • Create issues and pull requests
  • Sponsor the project

Contributions

You think this project is useful and you want to bring new ideas, new features, bug fixes, extend the docs?

Contributions are always welcome. Make sure you read the guidelines first

Note

I'm also working on a GUI desktop app for igel based on people's requests. You can find it under Igel-UI.

Download Details:

Author: Nidhaloff
Source Code: https://github.com/nidhaloff/igel 
License: MIT license

#machinelearning #datascience #automation #neuralnetwork 

Igel: A Delightful ML tool That Allows You To Train, Test
Royce Reinger

1673714355

LSTMs for Human Activity Recognition

LSTMs for Human Activity Recognition

Human Activity Recognition (HAR) using smartphones dataset and an LSTM RNN. Classifying the type of movement amongst six categories:

  • WALKING,
  • WALKING_UPSTAIRS,
  • WALKING_DOWNSTAIRS,
  • SITTING,
  • STANDING,
  • LAYING.

Compared to a classical approach, using a Recurrent Neural Network (RNN) with Long Short-Term Memory cells (LSTMs) requires little to no feature engineering. Data can be fed directly into the neural network, which acts like a black box that models the problem correctly. Other research on this activity recognition dataset uses a large amount of feature engineering, which is more of a signal processing approach combined with classical data science techniques. The approach here is very simple in terms of how much the data was preprocessed.

Let's use Google's neat Deep Learning library, TensorFlow, demonstrating the usage of an LSTM, a type of Artificial Neural Network that can process sequential data / time series.

Video dataset overview

Follow this link to see a video of the 6 activities recorded in the experiment with one of the participants:

Video of the experiment

[Watch video]

Details about the input data

I will be using an LSTM on the data to learn (from a cellphone attached to the waist) to recognise the type of activity that the user is doing. The dataset's description goes like this:

The sensor signals (accelerometer and gyroscope) were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec and 50% overlap (128 readings/window). The sensor acceleration signal, which has gravitational and body motion components, was separated using a Butterworth low-pass filter into body acceleration and gravity. The gravitational force is assumed to have only low frequency components, therefore a filter with 0.3 Hz cutoff frequency was used.

That said, I will use the almost raw data: only the gravity effect has been filtered out of the accelerometer as a preprocessing step for another 3D feature as an input to help learning. If you'd ever want to extract the gravity by yourself, you could fork my code on using a Butterworth Low-Pass Filter (LPF) in Python and edit it to have the right cutoff frequency of 0.3 Hz which is a good frequency for activity recognition from body sensors.
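For reference, here is a minimal sketch of such a filter with scipy (assuming scipy is installed; the 50 Hz sampling rate matches the dataset's 128 readings per 2.56 s window, and the synthetic signal is only for illustration):

import numpy as np
from scipy.signal import butter, filtfilt

def split_gravity(total_acc, fs=50.0, cutoff=0.3, order=4):
    # Butterworth low-pass filter: frequencies below ~0.3 Hz are kept as gravity
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    gravity = filtfilt(b, a, total_acc)
    body = total_acc - gravity  # what remains is the body acceleration
    return gravity, body

# Synthetic example: constant gravity plus a 2 Hz body motion component
t = np.linspace(0, 10, 500)
gravity, body = split_gravity(9.81 + 0.5 * np.sin(2 * np.pi * 2.0 * t))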

What is an RNN?

As explained in this article, an RNN takes many input vectors, processes them, and outputs other vectors. It can be roughly pictured as in the image below, imagining each rectangle has a vectorial depth and other special hidden quirks. In our case, the "many to one" architecture is used: we accept time series of feature vectors (one vector per time step) and convert them to a probability vector at the output for classification. Note that a "one to one" architecture would be a standard feedforward neural network.

RNN Architectures Learn more on RNNs
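As a minimal numpy sketch of the "many to one" idea (illustrative only; it uses a plain tanh recurrence rather than the LSTM cells used below, with the same dimensions as this dataset):

import numpy as np

def many_to_one_rnn(x_seq, W_xh, W_hh, W_hy):
    # x_seq has shape (n_steps, n_input); one output vector for the whole series
    h = np.zeros(W_hh.shape[0])
    for x_t in x_seq:                       # consume the time series step by step
        h = np.tanh(W_xh @ x_t + W_hh @ h)  # update the hidden state
    return W_hy @ h                         # single output after the last step

rng = np.random.default_rng(0)
n_steps, n_input, n_hidden, n_classes = 128, 9, 32, 6
out = many_to_one_rnn(
    rng.normal(size=(n_steps, n_input)),
    rng.normal(size=(n_hidden, n_input)),
    rng.normal(size=(n_hidden, n_hidden)),
    rng.normal(size=(n_classes, n_hidden)),
)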

What is an LSTM?

An LSTM is an improved RNN. It is more complex, but easier to train, avoiding what is called the vanishing gradient problem. I recommend this course for you to learn more on LSTMs.

Learn more on LSTMs

Results

Scroll on! Nice visuals await.

# All Includes

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf  # Version 1.0.0 (some previous versions are used in past commits)
from sklearn import metrics

import os
# Useful Constants

# Those are separate normalised input features for the neural network
INPUT_SIGNAL_TYPES = [
    "body_acc_x_",
    "body_acc_y_",
    "body_acc_z_",
    "body_gyro_x_",
    "body_gyro_y_",
    "body_gyro_z_",
    "total_acc_x_",
    "total_acc_y_",
    "total_acc_z_"
]

# Output classes to learn how to classify
LABELS = [
    "WALKING",
    "WALKING_UPSTAIRS",
    "WALKING_DOWNSTAIRS",
    "SITTING",
    "STANDING",
    "LAYING"
]

Let's start by downloading the data:

# Note: Linux bash commands start with a "!" inside those "ipython notebook" cells

DATA_PATH = "data/"

!pwd && ls
os.chdir(DATA_PATH)
!pwd && ls

!python download_dataset.py

!pwd && ls
os.chdir("..")
!pwd && ls

DATASET_PATH = DATA_PATH + "UCI HAR Dataset/"
print("\n" + "Dataset is now located at: " + DATASET_PATH)
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition
data     LSTM_files  LSTM_OLD.ipynb  README.md
LICENSE  LSTM.ipynb  lstm.py         screenlog.0
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data
download_dataset.py  source.txt

Downloading...
--2017-05-24 01:49:53--  https://archive.ics.uci.edu/ml/machine-learning-databases/00240/UCI%20HAR%20Dataset.zip
Resolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.249
Connecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.249|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 60999314 (58M) [application/zip]
Saving to: ‘UCI HAR Dataset.zip’

100%[======================================>] 60,999,314  1.69MB/s   in 38s    

2017-05-24 01:50:31 (1.55 MB/s) - ‘UCI HAR Dataset.zip’ saved [60999314/60999314]

Downloading done.

Extracting...
Extracting successfully done to /home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data/UCI HAR Dataset.
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data
download_dataset.py  __MACOSX  source.txt  UCI HAR Dataset  UCI HAR Dataset.zip
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition
data     LSTM_files  LSTM_OLD.ipynb  README.md
LICENSE  LSTM.ipynb  lstm.py         screenlog.0

Dataset is now located at: data/UCI HAR Dataset/

Preparing dataset:

TRAIN = "train/"
TEST = "test/"


# Load "X" (the neural network's training and testing inputs)

def load_X(X_signals_paths):
    X_signals = []

    for signal_type_path in X_signals_paths:
        file = open(signal_type_path, 'r')
        # Read dataset from disk, dealing with text files' syntax
        X_signals.append(
            [np.array(serie, dtype=np.float32) for serie in [
                row.replace('  ', ' ').strip().split(' ') for row in file
            ]]
        )
        file.close()

    return np.transpose(np.array(X_signals), (1, 2, 0))

X_train_signals_paths = [
    DATASET_PATH + TRAIN + "Inertial Signals/" + signal + "train.txt" for signal in INPUT_SIGNAL_TYPES
]
X_test_signals_paths = [
    DATASET_PATH + TEST + "Inertial Signals/" + signal + "test.txt" for signal in INPUT_SIGNAL_TYPES
]

X_train = load_X(X_train_signals_paths)
X_test = load_X(X_test_signals_paths)


# Load "y" (the neural network's training and testing outputs)

def load_y(y_path):
    file = open(y_path, 'r')
    # Read dataset from disk, dealing with text file's syntax
    y_ = np.array(
        [elem for elem in [
            row.replace('  ', ' ').strip().split(' ') for row in file
        ]],
        dtype=np.int32
    )
    file.close()

    # Subtract 1 from each output class for friendly 0-based indexing
    return y_ - 1

y_train_path = DATASET_PATH + TRAIN + "y_train.txt"
y_test_path = DATASET_PATH + TEST + "y_test.txt"

y_train = load_y(y_train_path)
y_test = load_y(y_test_path)
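
As a quick sanity check, the loaded arrays should have the following shapes (these match the dataset's description and the debugging printout further below):

print(X_train.shape, y_train.shape)  # expected: (7352, 128, 9) (7352, 1)
print(X_test.shape, y_test.shape)    # expected: (2947, 128, 9) (2947, 1)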

Additional Parameters:

Here are some core parameter definitions for the training.

For example, the whole neural network's structure could be summarised by enumerating those parameters and the fact that two LSTM cells are stacked on top of one another (the output of the first feeding the input of the second) as hidden layers through the time steps.

# Input Data

training_data_count = len(X_train)  # 7352 training series (windows with 50% overlap)
test_data_count = len(X_test)  # 2947 testing series
n_steps = len(X_train[0])  # 128 timesteps per series
n_input = len(X_train[0][0])  # 9 input parameters per timestep


# LSTM Neural Network's internal structure

n_hidden = 32 # Hidden layer num of features
n_classes = 6 # Total classes (the 6 activity labels listed above)


# Training

learning_rate = 0.0025
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300  # Loop 300 times on the dataset
batch_size = 1500
display_iter = 30000  # To show test set accuracy during training
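# (7352 series x 300 passes = 2,205,600 iterations; at a batch size of 1500 this is about 1470 training steps)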


# Some debugging info

print("Some useful info to get an insight on dataset's shape and normalisation:")
print("(X shape, y shape, every X's mean, every X's standard deviation)")
print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print("The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.")
Some useful info to get an insight on dataset's shape and normalisation:
(X shape, y shape, every X's mean, every X's standard deviation)
(2947, 128, 9) (2947, 1) 0.0991399 0.395671
The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.

Utility functions for training:

def LSTM_RNN(_X, _weights, _biases):
    # Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters.
    # Moreover, two LSTM cells are stacked which adds deepness to the neural network.
    # Note, some code of this notebook is inspired by a slightly different
    # RNN architecture used on another dataset; some of the credit goes to
    # "aymericdamien" under the MIT license.

    # (NOTE: this step could be greatly optimised by shaping the dataset once and for all.)
    # input shape: (batch_size, n_steps, n_input)
    _X = tf.transpose(_X, [1, 0, 2])  # permute n_steps and batch_size
    # Reshape to prepare input to hidden activation
    _X = tf.reshape(_X, [-1, n_input])
    # new shape: (n_steps*batch_size, n_input)

    # ReLU activation, thanks to Yu Zhao for adding this improvement here:
    _X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])
    # Split data because rnn cell needs a list of inputs for the RNN inner loop
    _X = tf.split(_X, n_steps, 0)
    # new shape: n_steps * (batch_size, n_hidden)

    # Define two stacked LSTM cells (two recurrent layers deep) with tensorflow
    lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
    # Get LSTM cell output
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)

    # Get last time step's output feature for a "many-to-one" style classifier,
    # as in the image describing RNNs at the top of this page
    lstm_last_output = outputs[-1]

    # Linear activation
    return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']


def extract_batch_size(_train, step, batch_size):
    # Function to fetch a "batch_size" amount of data from "(X|y)_train" data.

    shape = list(_train.shape)
    shape[0] = batch_size
    batch_s = np.empty(shape)

    for i in range(batch_size):
        # Loop index
        index = ((step-1)*batch_size + i) % len(_train)
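        # (the modulo above wraps the index around the end of the training set, so epochs blend into one another)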
        batch_s[i] = _train[index]

    return batch_s


def one_hot(y_, n_classes=n_classes):
    # Function to encode neural one-hot output labels from number indexes
    # e.g.:
    # one_hot(y_=[[5], [0], [3]], n_classes=6):
    #     return [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]

    y_ = y_.reshape(len(y_))
    return np.eye(n_classes)[np.array(y_, dtype=np.int32)]  # Returns FLOATS
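
The code above targets TensorFlow 1.x and its now-removed tf.contrib API. For readers on TensorFlow 2, a rough Keras equivalent of the same stacked two-cell architecture might look like the sketch below (my own untested translation, not the code used to produce the results that follow):

from tensorflow import keras

# Sketch of an assumed TF2/Keras equivalent of LSTM_RNN above
model = keras.Sequential([
    keras.Input(shape=(128, 9)),                                              # n_steps, n_input
    keras.layers.TimeDistributed(keras.layers.Dense(32, activation="relu")),  # per-timestep ReLU, like the 'hidden' weights
    keras.layers.LSTM(32, return_sequences=True),                             # first stacked LSTM cell
    keras.layers.LSTM(32),                                                    # second cell; keeps only the last time step ("many to one")
    keras.layers.Dense(6),                                                    # linear output, like the 'out' weights
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0025),
    loss=keras.losses.CategoricalCrossentropy(from_logits=True),  # matches softmax_cross_entropy_with_logits
    metrics=["accuracy"],
)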

Let's get serious and build the neural network:


# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = LSTM_RNN(x, weights, biases)

# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # The L2 loss prevents this overkill neural network from overfitting the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer

correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Hooray, now train the neural network:

# To keep track of training's performance
test_losses = []
test_accuracies = []
train_losses = []
train_accuracies = []

# Launch the graph
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
init = tf.global_variables_initializer()
sess.run(init)

# Perform Training steps with "batch_size" amount of example data at each loop
step = 1
while step * batch_size <= training_iters:
    batch_xs =         extract_batch_size(X_train, step, batch_size)
    batch_ys = one_hot(extract_batch_size(y_train, step, batch_size))

    # Fit training using batch data
    _, loss, acc = sess.run(
        [optimizer, cost, accuracy],
        feed_dict={
            x: batch_xs,
            y: batch_ys
        }
    )
    train_losses.append(loss)
    train_accuracies.append(acc)

    # Evaluate network only at some steps for faster training:
    if (step*batch_size % display_iter == 0) or (step == 1) or (step * batch_size > training_iters):

        # To not spam console, show training accuracy/loss in this "if"
        print("Training iter #" + str(step*batch_size) + \
              ":   Batch Loss = " + "{:.6f}".format(loss) + \
              ", Accuracy = {}".format(acc))

        # Evaluation on the test set (no learning made here - just evaluation for diagnosis)
        loss, acc = sess.run(
            [cost, accuracy],
            feed_dict={
                x: X_test,
                y: one_hot(y_test)
            }
        )
        test_losses.append(loss)
        test_accuracies.append(acc)
        print("PERFORMANCE ON TEST SET: " + \
              "Batch Loss = {}".format(loss) + \
              ", Accuracy = {}".format(acc))

    step += 1

print("Optimization Finished!")

# Accuracy for test data

one_hot_predictions, accuracy, final_loss = sess.run(
    [pred, accuracy, cost],
    feed_dict={
        x: X_test,
        y: one_hot(y_test)
    }
)

test_losses.append(final_loss)
test_accuracies.append(accuracy)

print("FINAL RESULT: " + \
      "Batch Loss = {}".format(final_loss) + \
      ", Accuracy = {}".format(accuracy))
WARNING:tensorflow:From <ipython-input-19-3339689e51f6>:9: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
Training iter #1500:   Batch Loss = 5.416760, Accuracy = 0.15266665816307068
PERFORMANCE ON TEST SET: Batch Loss = 4.880829811096191, Accuracy = 0.05632847175002098
Training iter #30000:   Batch Loss = 3.031930, Accuracy = 0.607333242893219
PERFORMANCE ON TEST SET: Batch Loss = 3.0515167713165283, Accuracy = 0.6067186594009399
Training iter #60000:   Batch Loss = 2.672764, Accuracy = 0.7386666536331177
PERFORMANCE ON TEST SET: Batch Loss = 2.780435085296631, Accuracy = 0.7027485370635986
Training iter #90000:   Batch Loss = 2.378301, Accuracy = 0.8366667032241821
PERFORMANCE ON TEST SET: Batch Loss = 2.6019773483276367, Accuracy = 0.7617915868759155
Training iter #120000:   Batch Loss = 2.127290, Accuracy = 0.9066667556762695
PERFORMANCE ON TEST SET: Batch Loss = 2.3625404834747314, Accuracy = 0.8116728663444519
Training iter #150000:   Batch Loss = 1.929805, Accuracy = 0.9380000233650208
PERFORMANCE ON TEST SET: Batch Loss = 2.306251049041748, Accuracy = 0.8276212215423584
Training iter #180000:   Batch Loss = 1.971904, Accuracy = 0.9153333902359009
PERFORMANCE ON TEST SET: Batch Loss = 2.0835530757904053, Accuracy = 0.8771631121635437
Training iter #210000:   Batch Loss = 1.860249, Accuracy = 0.8613333702087402
PERFORMANCE ON TEST SET: Batch Loss = 1.9994492530822754, Accuracy = 0.8788597583770752
Training iter #240000:   Batch Loss = 1.626292, Accuracy = 0.9380000233650208
PERFORMANCE ON TEST SET: Batch Loss = 1.879166603088379, Accuracy = 0.8944689035415649
Training iter #270000:   Batch Loss = 1.582758, Accuracy = 0.9386667013168335
PERFORMANCE ON TEST SET: Batch Loss = 2.0341007709503174, Accuracy = 0.8361043930053711
Training iter #300000:   Batch Loss = 1.620352, Accuracy = 0.9306666851043701
PERFORMANCE ON TEST SET: Batch Loss = 1.8185184001922607, Accuracy = 0.8639293313026428
Training iter #330000:   Batch Loss = 1.474394, Accuracy = 0.9693333506584167
PERFORMANCE ON TEST SET: Batch Loss = 1.7638503313064575, Accuracy = 0.8747878670692444
Training iter #360000:   Batch Loss = 1.406998, Accuracy = 0.9420000314712524
PERFORMANCE ON TEST SET: Batch Loss = 1.5946787595748901, Accuracy = 0.902273416519165
Training iter #390000:   Batch Loss = 1.362515, Accuracy = 0.940000057220459
PERFORMANCE ON TEST SET: Batch Loss = 1.5285792350769043, Accuracy = 0.9046487212181091
Training iter #420000:   Batch Loss = 1.252860, Accuracy = 0.9566667079925537
PERFORMANCE ON TEST SET: Batch Loss = 1.4635565280914307, Accuracy = 0.9107565879821777
Training iter #450000:   Batch Loss = 1.190078, Accuracy = 0.9553333520889282
...
PERFORMANCE ON TEST SET: Batch Loss = 0.42567864060401917, Accuracy = 0.9324736595153809
Training iter #2070000:   Batch Loss = 0.342763, Accuracy = 0.9326667189598083
PERFORMANCE ON TEST SET: Batch Loss = 0.4292983412742615, Accuracy = 0.9273836612701416
Training iter #2100000:   Batch Loss = 0.259442, Accuracy = 0.9873334169387817
PERFORMANCE ON TEST SET: Batch Loss = 0.44131210446357727, Accuracy = 0.9273836612701416
Training iter #2130000:   Batch Loss = 0.284630, Accuracy = 0.9593333601951599
PERFORMANCE ON TEST SET: Batch Loss = 0.46982717514038086, Accuracy = 0.9093992710113525
Training iter #2160000:   Batch Loss = 0.299012, Accuracy = 0.9686667323112488
PERFORMANCE ON TEST SET: Batch Loss = 0.48389002680778503, Accuracy = 0.9138105511665344
Training iter #2190000:   Batch Loss = 0.287106, Accuracy = 0.9700000286102295
PERFORMANCE ON TEST SET: Batch Loss = 0.4670214056968689, Accuracy = 0.9216151237487793
Optimization Finished!
FINAL RESULT: Batch Loss = 0.45611169934272766, Accuracy = 0.9165252447128296

Training is good, but having visual insight is even better:

Okay, let's plot this simply in the notebook for now.

# (Inline plots: )
%matplotlib inline

font = {
    'family' : 'Bitstream Vera Sans',
    'weight' : 'bold',
    'size'   : 18
}
matplotlib.rc('font', **font)

width = 12
height = 12
plt.figure(figsize=(width, height))

indep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))
plt.plot(indep_train_axis, np.array(train_losses),     "b--", label="Train losses")
plt.plot(indep_train_axis, np.array(train_accuracies), "g--", label="Train accuracies")

indep_test_axis = np.append(
    np.array(range(batch_size, len(test_losses)*display_iter, display_iter)[:-1]),
    [training_iters]
)
plt.plot(indep_test_axis, np.array(test_losses),     "b-", label="Test losses")
plt.plot(indep_test_axis, np.array(test_accuracies), "g-", label="Test accuracies")

plt.title("Training session's progress over iterations")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Training Progress (Loss or Accuracy values)')
plt.xlabel('Training iteration')

plt.show()

LSTM Training Testing Comparison Curve

And finally, the multi-class confusion matrix and metrics!

# Results

predictions = one_hot_predictions.argmax(1)

print("Testing Accuracy: {}%".format(100*accuracy))

print("")
print("Precision: {}%".format(100*metrics.precision_score(y_test, predictions, average="weighted")))
print("Recall: {}%".format(100*metrics.recall_score(y_test, predictions, average="weighted")))
print("f1_score: {}%".format(100*metrics.f1_score(y_test, predictions, average="weighted")))

print("")
print("Confusion Matrix:")
confusion_matrix = metrics.confusion_matrix(y_test, predictions)
print(confusion_matrix)
normalised_confusion_matrix = np.array(confusion_matrix, dtype=np.float32)/np.sum(confusion_matrix)*100

print("")
print("Confusion matrix (normalised to % of total test data):")
print(normalised_confusion_matrix)
print("Note: training and testing data is not equally distributed amongst classes, ")
print("so it is normal that more than a 6th of the data is correctly classifier in the last category.")

# Plot Results:
width = 12
height = 12
plt.figure(figsize=(width, height))
plt.imshow(
    normalised_confusion_matrix,
    interpolation='nearest',
    cmap=plt.cm.rainbow
)
plt.title("Confusion matrix \n(normalised to % of total test data)")
plt.colorbar()
tick_marks = np.arange(n_classes)
plt.xticks(tick_marks, LABELS, rotation=90)
plt.yticks(tick_marks, LABELS)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Testing Accuracy: 91.65252447128296%

Precision: 91.76286479743305%
Recall: 91.65252799457076%
f1_score: 91.6437546304815%

Confusion Matrix:
[[466   2  26   0   2   0]
 [  5 441  25   0   0   0]
 [  1   0 419   0   0   0]
 [  1   1   0 396  87   6]
 [  2   1   0  87 442   0]
 [  0   0   0   0   0 537]]

Confusion matrix (normalised to % of total test data):
[[ 15.81269073   0.06786563   0.88225317   0.           0.06786563   0.        ]
 [  0.16966406  14.96437073   0.84832031   0.           0.           0.        ]
 [  0.03393281   0.          14.21784878   0.           0.           0.        ]
 [  0.03393281   0.03393281   0.          13.43739319   2.95215464
    0.20359688]
 [  0.06786563   0.03393281   0.           2.95215464  14.99830341   0.        ]
 [  0.           0.           0.           0.           0.          18.22192001]]
Note: training and testing data is not equally distributed amongst classes,
so it is normal that more than a 6th of the data is correctly classified in the last category.

Confusion Matrix

sess.close()

Conclusion

Outstandingly, the final accuracy is 91%! And it can peak at values such as 93.25% at some lucky moments during training, depending on how the neural network's weights were randomly initialised at the start.

This means that the neural network is almost always able to correctly identify the movement type! Remember, the phone is attached at the waist, and each series to classify contains just a 128-sample window from two internal sensors (i.e. 2.56 seconds at 50 Hz), so it amazes me how accurate those predictions are given such a small window of context and raw data. I've validated and re-validated that there is no important bug, and the community has used and tried this code a lot. (Note: be sure to report something in the issue tab if you find bugs; otherwise Quora, StackOverflow, and other StackExchange sites are the places for asking questions.)

I especially did not expect such good results for distinguishing between the labels "SITTING" and "STANDING". Those are seemingly almost the same thing from the point of view of a device placed at waist level, according to how the dataset was originally gathered. Though, it is still possible to see a little cluster in the matrix between those classes, which drifts just a bit away from the identity. This is great.

It is also possible to see that there was a slight difficulty in telling apart "WALKING", "WALKING_UPSTAIRS" and "WALKING_DOWNSTAIRS". Obviously, those activities are quite similar in terms of movements.

I also tried my code without the gyroscope, using only the accelerometer's 6 features (body and total 3D acceleration) and without changing the training hyperparameters, and got an accuracy of 87%. In general, gyroscopes consume more power than accelerometers, so it is preferable to turn them off when possible.

Improvements

In another open-source repository of mine, the accuracy is pushed up to nearly 94% using a special deep LSTM architecture that combines the concepts of bidirectional RNNs, residual connections, and stacked cells. This architecture is also tested on another, similar activity dataset. It resembles the nice architecture used in "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", without the attention mechanism and with just the encoder part, as a "many to one" architecture instead of a "many to many", adapted to the Human Activity Recognition (HAR) problem. I also worked more on the problem and came up with the LARNN; however, it's complicated for just a little gain, so the current, original activity recognition project remains the better choice for its simplicity. We've also coded a non-deep-learning machine learning pipeline on the same datasets, using classical featurization techniques and older machine learning algorithms.

If you want to learn more about deep learning, I have also built a list of the deep learning resources that have proven the most useful to me, available here.

References

The dataset can be found on the UCI Machine Learning Repository:

Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. A Public Domain Dataset for Human Activity Recognition Using Smartphones. 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013. Bruges, Belgium, 24-26 April 2013.

Citation

Copyright (c) 2016 Guillaume Chevalier. To cite my code, you can point to the URL of the GitHub repository, for example:

Guillaume Chevalier, LSTMs for Human Activity Recognition, 2016, https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition

My code is available for free, even for private usage, for anyone under the MIT License; however, I ask that you cite it if you use it.

Here is the BibTeX citation code:

@misc{chevalier2016lstms,
  title={LSTMs for human activity recognition},
  author={Chevalier, Guillaume},
  year={2016}
}

I've also published a second paper, with contributors, about a second iteration of this work that improves it with deeper neural networks. The paper is available on arXiv. Here is the BibTeX citation code for this newer piece of work based on this project:

@article{DBLP:journals/corr/abs-1708-08989,
  author    = {Yu Zhao and
               Rennong Yang and
               Guillaume Chevalier and
               Maoguo Gong},
  title     = {Deep Residual Bidir-LSTM for Human Activity Recognition Using Wearable
               Sensors},
  journal   = {CoRR},
  volume    = {abs/1708.08989},
  year      = {2017},
  url       = {http://arxiv.org/abs/1708.08989},
  archivePrefix = {arXiv},
  eprint    = {1708.08989},
  timestamp = {Mon, 13 Aug 2018 16:46:48 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1708-08989},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Liked this project? Did it help you? Leave a star, fork and share the love!


# Let's convert this notebook to a README automatically for the GitHub project's title page:
!jupyter nbconvert --to markdown LSTM.ipynb
!mv LSTM.md README.md
[NbConvertApp] Converting notebook LSTM.ipynb to markdown
[NbConvertApp] Support files will be in LSTM_files/
[NbConvertApp] Making directory LSTM_files
[NbConvertApp] Making directory LSTM_files
[NbConvertApp] Writing 38654 bytes to LSTM.md

Download Details:

Author: Guillaume-chevalier
Source Code: https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition 
License: MIT license

#machinelearning #deeplearning #neuralnetwork #tensorflow 

LSTMs for Human Activity Recognition
Royce  Reinger

Royce Reinger

1673254680

Vector Search Engine & Database for The Next Generation Of AI Apps

qdrant

Vector Search Engine for the next generation of AI applications

Qdrant (read: quadrant) is a vector similarity search engine and vector database. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored for extended filtering support, which makes it useful for all sorts of neural-network-based or semantic matching, faceted search, and other applications.

Qdrant is written in Rust 🦀, which makes it fast and reliable even under high load.

With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more!

Demo Projects

Semantic Text Search 🔍

The neural search uses semantic embeddings instead of keywords and works best with short texts. With Qdrant and a pre-trained neural network, you can build and deploy semantic neural search on your data in minutes. Try it online!

Similar Image Search - Food Discovery 🍕

There are multiple ways to discover things, text search is not the only one. In the case of food, people rely more on appearance than description and ingredients. So why not let people choose their next lunch by its appearance, even if they don’t know the name of the dish? Check it out!

Extreme classification - E-commerce Product Categorization 📺

Extreme classification is a rapidly growing research area within machine learning, focusing on multi-class and multi-label problems that involve an extremely large number of labels - sometimes millions, or even tens of millions, of classes. The most promising way to solve this problem is to use similarity learning models. We put together a demo example of how you could approach the problem with a pre-trained transformer model and Qdrant. So you can play with it online!

More solutions

  • Semantic Text Search
  • Similar Image Search
  • Recommendations
  • Chat Bots
  • Matching Engines
  • Anomaly Detection

API

REST

Online OpenAPI 3.0 documentation is available here. OpenAPI makes it easy to generate a client for virtually any framework or programming language.

You can also download raw OpenAPI definitions.

gRPC

For faster production-tier searches, Qdrant also provides a gRPC interface. You can find gRPC documentation here.

Clients

Qdrant offers client libraries to help you integrate it into your application stack with ease.

Features

Filtering and Payload

Qdrant supports any JSON payload associated with vectors. It not only stores the payload but also allows you to filter results based on payload values. Any combination of should, must, and must_not conditions is allowed, and unlike ElasticSearch post-filtering, Qdrant guarantees that all relevant vectors are retrieved.
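
As a rough illustration of how those conditions combine (a minimal sketch assuming the official qdrant-client Python package; the payload field names here are made up):

from qdrant_client.http import models

# must / should / must_not combine roughly like boolean AND / OR / NOT over payload conditions
city_filter = models.Filter(
    must=[models.FieldCondition(key="city", match=models.MatchValue(value="Berlin"))],
    must_not=[models.FieldCondition(key="archived", match=models.MatchValue(value=True))],
)
print(city_filter)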

Rich Data Types

Vector payload supports a large variety of data types and query conditions, including string matching, numerical ranges, geo-locations, and more. Payload filtering conditions allow you to build almost any custom business logic that should work on top of similarity matching.

Query Planning and Payload Indexes

Using the information about the stored payload values, the query planner decides on the best way to execute the query. For example, if the search space limited by the filters is small, it is more efficient to use a full brute-force scan than an index.

SIMD Hardware Acceleration

Qdrant can take advantage of modern x86-64 CPU architectures, allowing you to search even faster on modern hardware.

Write-Ahead Logging

Once the service has confirmed an update, it won't lose data even in the case of a power outage. All operations are stored in the update journal, and the latest database state can easily be reconstructed at any moment.

Distributed Deployment

Since v0.8.0, Qdrant supports distributed deployment. In this mode, multiple Qdrant machines are joined into a cluster to provide horizontal scaling. Coordination within the cluster is provided by the Raft consensus protocol.

Stand-alone

Qdrant does not rely on any external database or orchestration controller, which makes it very easy to configure.

Usage

Docker 🐳

Build your own from source

docker build . --tag=qdrant/qdrant

Or use latest pre-built image from DockerHub

docker pull qdrant/qdrant

To run the container, use the command:

docker run -p 6333:6333 qdrant/qdrant

If you need a more fine-grained setup, you can also define a storage path and a custom configuration:

docker run -p 6333:6333 \
    -v $(pwd)/path/to/data:/qdrant/storage \
    -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/production.yaml \
    qdrant/qdrant
  • /qdrant/storage - is a place where Qdrant persists all your data. Make sure to mount it as a volume, otherwise docker will drop it with the container.
  • /qdrant/config/production.yaml - is the file with the engine configuration. You can override any value from the reference config.

Now Qdrant should be accessible at localhost:6333.
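
From Python, a minimal round trip against that local instance could look like this (a sketch assuming the official qdrant-client package; the collection name, vector size, and payloads are placeholders):

# pip install qdrant-client
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(host="localhost", port=6333)

# Create (or re-create) a small collection of 4-dimensional cosine-compared vectors
client.recreate_collection(
    collection_name="demo",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
)

# Store a few points: an id, a vector, and an arbitrary JSON payload
client.upsert(
    collection_name="demo",
    points=[
        models.PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"city": "Berlin"}),
        models.PointStruct(id=2, vector=[0.4, 0.3, 0.2, 0.1], payload={"city": "London"}),
    ],
)

# Nearest-neighbour search for a query vector
hits = client.search(collection_name="demo", query_vector=[0.1, 0.2, 0.3, 0.4], limit=1)
print(hits)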

Docs 📓

Contacts

Building something special with Qdrant? We can help!

Also available as managed solution in the Qdrant Cloud https://qdrant.to/cloud

Download Details:

Author: qdrant
Source Code: https://github.com/qdrant/qdrant 
License: Apache-2.0 license

#machinelearning #search #engine #neuralnetwork 

Vector Search Engine & Database for The Next Generation Of AI Apps
Royce  Reinger

Royce Reinger

1673019900

Plain Python Implementations Of Basic Machine Learning Algorithms

Machine learning basics

This repository contains implementations of basic machine learning algorithms in plain Python (Python Version 3.6+). All algorithms are implemented from scratch without using additional machine learning libraries. The intention of these notebooks is to provide a basic understanding of the algorithms and their underlying structure, not to provide the most efficient implementations.
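
To give a flavour of that "from scratch" style (my own illustrative example, not code taken from the repository), here is a gradient-descent linear regression in a few lines:

import numpy as np

# Fit y ≈ w*x + b by gradient descent on mean squared error (toy data: y = 2x + 1)
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
w, b, lr = 0.0, 0.0, 0.1
for _ in range(1000):
    y_pred = w * X + b
    w -= lr * 2 * np.mean((y_pred - y) * X)  # dMSE/dw
    b -= lr * 2 * np.mean(y_pred - y)        # dMSE/db
print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0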


Data preprocessing

After several requests, I started preparing notebooks on how to preprocess datasets for machine learning. Over the next months I will add one notebook for each kind of dataset (text, images, ...). As before, the intention of these notebooks is to provide a basic understanding of the preprocessing steps, not to provide the most efficient implementations.


Live demo

Run the notebooks online without having to clone the repository or install jupyter: Binder.

Note: this does not work for the data_preprocessing.ipynb and image_preprocessing.ipynb notebooks because they require downloading a dataset first.

Feedback

If you have a favorite algorithm that should be included or spot a mistake in one of the notebooks, please let me know by creating a new issue.



Download Details:

Author: Zotroneneis
Source Code: https://github.com/zotroneneis/machine_learning_basics 
License: MIT license

#machinelearning #python #algorithm #neuralnetwork 

Plain Python Implementations Of Basic Machine Learning Algorithms