iPhone App Development Services | Hire iPhone App Developer India

We provide end-to-end iPhone app development services. Hire our iOS/iPhone app developers to build innovative custom iOS apps. Expert App Devs is a leading iPhone app development partner that provides businesses with secure and scalable solutions. We fuel your apps with the right technology stack, architecture, and interface design to improve downloads, maximize retention, and enhance customer lifetime value for your business. Our iPhone-first solutions are engineered to make your business future-ready.

#iosappdevelopment #hireiosappdeveloppers #expertappdevs #business #iosapplication #technologies #usa #uk #uae #iphone #iosappdeveloper


How is a Kucoin clone script best for starting a crypto business?

After going through various articles about starting a crypto exchange business like Kucoin, you have probably come across the term "Kucoin clone script" more than once. As an entrepreneur, you wouldn't commit to a specific method just because you recognized it; a detailed analysis is required to ensure its efficiency.

The Kucoin clone script is a result-oriented development methodology, but there is still much confusion about whether it is the best choice. Without delay, let's dive deeper into the topic.

As you might know, a Kucoin clone script is pre-built crypto exchange software that comprises all the essential features a crypto exchange like Kucoin needs to run seamlessly.

Features of Kucoin clone script

  • High-Performance Matching Engine
  • Spot Trading
  • Margin Trading
  • Futures Trading
  • P2P Trading
  • OTC Trading
  • User Dashboard
  • Admin Dashboard
  • Extended Trade View
  • KYC/AML
  • Referral program
  • Crypto/Fiat Payment Gateway integration
  • Buy/Sell advertisements
  • User to user exchange BUY/SELL


Security Features

  • Jail Login
  • Two-Factor Authentication
  • Cloudflare Integration
  • SQL Injection Prevention
  • End-To-End Encryption Based SSL
  • Anti Denial-of-Service (DoS)
  • Cross-Site Request Forgery Protection
  • Server-Side Request Forgery Protection
  • Anti Distributed Denial-of-Service (DDoS)

Apart from these features, various other benefits make this Kucoin clone software a strong fit for starting a crypto exchange business.

Easy Customization

With the help of a Kucoin clone script, you can make the necessary customizations to your crypto exchange. Beyond the built-in features, additional security features can be added to enhance the exchange's competitiveness.


Instant Deployment

Using a Kucoin clone script drastically reduces the overall time needed to launch a crypto exchange. Since the software is prefabricated, your exchange is ready to launch as soon as the required changes and customizations are made.


Cost-effective

The Kucoin clone script also suits the budgets of most budding entrepreneurs. Instead of spending a pile of money on other development methodologies, using a clone script saves a large portion of your budget.


High success Ratio

Because a Kucoin clone script is developed by a skilled team of experts, the script itself carries a professional touch. With such a product, you can stand out from amateur competitors, and attracting a large volume of traders to your exchange is the main factor in its success.

To get these benefits, all you have to do is pick the best crypto exchange clone script provider from among the many available.
 

Equity Risks one needs to be aware of while Investing

If you are here, you've probably considered investing in equity shares. Like many others, you may believe that equity investments are a foolproof way of earning steady returns in the long run.

However, many hesitate because of the risks associated with equity investments. Perhaps you are put off by the disclaimer at the end of every financial commercial, or maybe someone consistently talked down investing in equities.

To reassure yourself, you turn to the internet and land here, at the right place! This blog will walk you through the details and help you make informed decisions.

Types of Risks: 

Equity investments involve two types of risk: Systematic Risk and Unsystematic Risk. You'll learn more about them below:

1. Systematic Risk:

This may be a new term for you. Systematic Risk, otherwise known as market risk, affects all stocks as well as the overall market, directly or indirectly. It means all companies are affected rather than specific ones.

Factors such as the economic and political environment, interest rates, and inflation can affect market prices. Systematic risk is therefore unpredictable and difficult to avoid completely.

2. Unsystematic Risk

Now comes a risk you might be familiar with but never learned the name of. Risk specific to a particular company, or to an industry, is called Unsystematic Risk. These risks show up when the company runs into problems or faces uncertainties.

Being the counterpart of Systematic Risk, it affects not the whole market but only a part of it. Management changes or breakdowns, product recalls, newly emerged competitors, and internal strikes all count as sources of unsystematic risk.

A famous example you can relate to is the turmoil the Indian telecommunications sector is going through: large players are providing low-cost services, squeezing the profits of smaller players.

This is just one example of many. Next, you will learn how to handle the risks.

Handling the Risk

To manage such risks, select assets based on your investment goals, time frame, and risk tolerance. Diversification will go a long way in helping you handle risk, as it spreads your money across a wide variety of assets. Not all your assets will be affected when the market is, and when a particular set of stocks is not performing well, you can rely on other stocks to compensate for the loss.

One more thing you can do is keep your funds locked in, so you won't be able to sell your shares for a time, and then sell when the value increases in the future. Also do thorough research, as it will help you understand the various factors that affect equity investments.

Conclusion 

Investing in equities involves both risk and reward. Whether you are a beginner or a veteran investor, plan to stay invested for the long term. Make sure you do your research and seek advice from a brokerage firm that supports you and teaches you the nuances of investing in the stock market.

As time goes on, you will see how valuable equity investments are. Choose a brokerage firm that not only offers the lowest brokerage for trading in India but also gives you a smooth investing experience. One such firm is Goodwill Wealth Management.

 #finance  #business 


Modeling Sequential Iterative Neural Networks in TensorFlow

Sequence to Sequence (seq2seq) Recurrent Neural Network (RNN) for Time Series Forecasting

Note: You can find the accompanying seq2seq RNN forecasting presentation slides here, as well as the Google Colab file for running the present notebook (if you're not already in Colab).

This is a series of exercises you can try to solve to learn how to code Encoder-Decoder Sequence to Sequence Recurrent Neural Networks (seq2seq RNNs). You will solve simple toy signal prediction problems. Seq2seq architectures may also be used for more sophisticated purposes, such as Natural Language Processing (NLP).

This project presents 4 exercises of gradually increasing difficulty. I take for granted that you have at least some knowledge of how RNNs work and how they can be shaped into an encoder-decoder seq2seq setup of the simplest form (without attention). To learn more about RNNs in TensorFlow, you may want to visit this other RNN project, which I built for that purpose.

The current project is a series of examples I first built in French, but I haven't had the time to regenerate all the charts with proper English text. I originally built this project for the practical part of the third hour of a master class conference I presented at the Web At Quebec (WAQ) in March 2017.

How to use this ".ipynb" Python notebook?

A ".py" Python version of this tutorial is available in the repository, but it is more convenient to run the code inside the notebook or within Google Colab.

To run the notebook, launch jupyter-notebook from the command line to open the web notebook IDE, then choose the .ipynb file. In Google Colab, if you want to run the code on a GPU, make sure to do Runtime > Change Runtime Type and select GPU for Python 3.

Exercises

Note that the dataset changes depending on the exercise. Most of the time, you will only have to edit the neural network's training parameters to succeed at an exercise, but at a certain point, changes to the architecture itself will be required. The datasets used for these exercises are found in datasets.py.

Exercise 1

In theory, it is possible to create a perfect prediction of the signal for this exercise, as it is deterministic. The neural network's parameters have been set to "somehow" acceptable values for a first training. You'll want to play with the hyperparameters until you reach predictions like those:

Note: the neural network sees only what is to the left of the chart and is trained to predict what is at the right (predictions in yellow).

We have 2 time series to predict at once, and they are tied together. That means our neural network processes multidimensional data. A simple example would be to receive the past values of multiple stock market symbols as input in order to predict the future values of all those symbols, whose values evolve together in time. That is what we will do in exercise 4 with the USD and EUR values of BTC, which we'll see both at once.
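
To make the data layout concrete, here is a minimal, hypothetical sketch (the names below are illustrative, not from datasets.py) of what such a tied, 2-dimensional series looks like as a (batch_size, seq_length, input_dim) array:

import numpy as np

# Two toy signals that evolve together in time (tied like BTC/USD and BTC/EUR).
t = np.linspace(0.0, 2.0 * np.pi, 20)
signal_a = np.sin(t)
signal_b = 0.5 * np.sin(t)  # tied to signal_a, just scaled

# Stack them along the last axis: shape (seq_length, input_dim) = (20, 2).
series = np.stack([signal_a, signal_b], axis=-1)

# Add a batch axis: shape (batch_size, seq_length, input_dim) = (1, 20, 2).
batch = series[np.newaxis, ...]
print(batch.shape)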

Exercise 2

Here, rather than 2 parallel signals to predict, we have only one, for simplicity. HOWEVER, this signal is a superposition of two sine waves of varying wavelength and offset (restricted to particular minimum and maximum wavelengths).

In order to finish this exercise properly, you will need to edit the neural network's hyperparameters. We recommend first trying hyperparameters like these:

  • n_samples = 125000
  • epochs = 1
  • batch_size = 50
  • hidden_dim = 35

Here are predictions achieved with a bigger neural network with 3 stacked recurrent cells and a width of 500 hidden units for each of those cells:

Note that it would be possible to obtain better results with a smaller neural network, given better training hyperparameters, longer training, added dropout, and so on; see the sketch just below.
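
For instance, a hypothetical variation of the _create_stacked_rnn_cells helper defined further below could add dropout directly on the GRU cells (tf.keras.layers.GRUCell accepts dropout and recurrent_dropout arguments):

import tensorflow as tf

def _create_stacked_rnn_cells_with_dropout(step):
    # Hypothetical sketch: same stacking as in this notebook's helper,
    # with dropout added on each GRU cell for regularization.
    return [
        tf.keras.layers.GRUCell(
            step.hyperparams['hidden_dim'],
            dropout=0.2,             # dropout on the input transformation
            recurrent_dropout=0.2    # dropout on the recurrent state
        )
        for _ in range(step.hyperparams['layers_stacked_count'])
    ]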

Exercise 3

This exercise is similar to the previous one, except that the input data given to the encoder is noisy while the expected output is NOT noisy. This makes the task a bit harder. In this specific data context, we can call our neural network a denoising autoregressive autoencoder. Here is a good example of what a training example (and a prediction) could now look like:

The neural network is therefore brought to denoise the signal in order to infer its future smooth values. Here are some examples of better predictions on this version of the dataset:

As with exercise 2, it would be possible here too to obtain better results. Note that it would also have been possible to ask you to reconstruct the denoised signal from the noisy input (rather than predicting its future values), as a denoising autoencoder. This type of architecture is also useful for data compression, such as manipulating images.

Exercise 4

This exercise is much harder than the previous ones and is built more as an open-ended suggestion: predicting the future value of Bitcoin's price. We have some daily market data of Bitcoin's value, namely BTC/USD and BTC/EUR. This is not enough to build a good predictor; data precise at the minute or even second level would be more interesting. Here is a prediction made on actual future values. The neural network was not trained on the future values shown here, so this is a legitimate prediction, given a well-enough model trained on the task:

Disclaimer: this prediction of the future values was really good, and you should not expect predictions to always be that good using as little data as this (side note: the other prediction charts in this project are all "average" except this one). I haven't really taken the time to compare this model to other financial models. For this exercise, you can try to plug more valuable financial data into the model in order to make more accurate predictions. Remember that I provided the code for the datasets in datasets.py, but it could be replaced with more comprehensive data for predicting Bitcoin more accurately.

The model's input and output dimensions are both 2D (BTC/USD and BTC/EUR). As an example, you could create additional input dimensions/streams which could contain weather data and more financial data, such as the S&P 500, the Dow Jones, and so on. Other more creative input data could be sine waves (or other-shaped waves such as saw waves, triangles, or paired sin and cos signals) representing the fluctuation of minutes, hours, days, weeks, months, years, moon cycles, and so on (as we did in Neuraxio's Time Series Solution). This could be combined with a stream of social media sentiment analysis about the word "Bitcoin" to obtain another input signal which is more human-based and abstract. It would also be interesting to know where Bitcoin is most used.

With all the above-mentioned examples, it would be possible to have all of this as input features, at every time step: (BTC/USD, BTC/EUR, Dow_Jones, SP_500, hour_of_day, day_of_week, day_of_month, week_of_year, year, moon_cycle, meteo_USA, meteo_EUROPE, social_sentiment). Finally, there could be these two output features, or more: (BTC/USD, BTC/EUR).
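
As a small illustration of the sine/cosine time-encoding idea above, here is a hypothetical sketch (not part of this project's code) that turns a cyclic quantity like the hour of day into a smooth two-dimensional feature:

import numpy as np

def cyclical_encode(value, period):
    # Hypothetical helper: encode a cyclic quantity as a (sin, cos) pair,
    # so that hour 23 and hour 0 end up close together in feature space.
    angle = 2.0 * np.pi * value / period
    return np.sin(angle), np.cos(angle)

hour_sin, hour_cos = cyclical_encode(23, period=24)   # hour_of_day
day_sin, day_cos = cyclical_encode(6, period=7)       # day_of_week
# Such pairs could be appended as extra input dimensions at every time step.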

This prediction concept and similar time series forecasting algorithms can apply to many things, such as auto-correcting machines for Industry 4.0, quality assurance in production chains, traffic forecasting, weather prediction, movement and action prediction, and lots of other types of short-term and mid-term statistical predictions or forecasts.

Install Requirements

!pip install tensorflow-gpu==2.1 neuraxle==0.3.1 neuraxle_tensorflow==0.1.0
Requirement already satisfied: tensorflow-gpu==2.1, neuraxle==0.3.1, neuraxle_tensorflow==0.1.0, and their dependencies (full pip output trimmed).
import urllib.request  # import the submodule explicitly: urllib.request.urlopen is used below

def download_import(filename):
    with open(filename, "wb") as f:
        # Downloading like that is needed because of Colab operating from a Google Drive folder that is just "shared with you".
        # https://drive.google.com/drive/folders/1U0xQMxVespjQilMhYW4mDxN02IwEW67I
        url = 'https://raw.githubusercontent.com/guillaume-chevalier/seq2seq-signal-prediction/master/{}'.format(filename)
        f.write(urllib.request.urlopen(url).read())

download_import("datasets.py")
download_import("plotting.py")
download_import("steps.py")
from typing import List
from logging import warning

import tensorflow as tf
from neuraxle.data_container import DataContainer
from neuraxle.hyperparams.space import HyperparameterSamples
from neuraxle.metaopt.random import ValidationSplitWrapper
from neuraxle.metrics import MetricsWrapper
from neuraxle.pipeline import Pipeline, MiniBatchSequentialPipeline
from neuraxle.steps.data import EpochRepeater, DataShuffler
from neuraxle.steps.flow import TrainOnlyWrapper
from neuraxle.steps.loop import ForEachDataInput
from sklearn.metrics import mean_squared_error
from tensorflow_core.python.client import device_lib
from tensorflow_core.python.keras import Input, Model
from tensorflow_core.python.keras.layers import GRUCell, RNN, Dense
from tensorflow_core.python.training.adam import AdamOptimizer

from datasets import generate_data
from datasets import metric_3d_to_2d_wrapper
from neuraxle_tensorflow.tensorflow_v1 import TensorflowV1ModelStep
from neuraxle_tensorflow.tensorflow_v2 import Tensorflow2ModelStep
from plotting import plot_metrics
from steps import MeanStdNormalizer, ToNumpy, PlotPredictionsWrapper

%matplotlib inline
def choose_tf_device():
    """
    Choose a TensorFlow device (e.g.: GPU if available) to compute on.
    """
    tf.debugging.set_log_device_placement(True)
    devices = [x.name for x in device_lib.list_local_devices()]
    print('You can use the following tf devices: {}'.format(devices))
    try:
        chosen_device = [d for d in devices if 'gpu' in d.lower()][0]
    except IndexError:
        warning(
            "No GPU device found. Please make sure to do `Runtime > Change Runtime Type` and select GPU for Python 3.")
        chosen_device = devices[0]
    print('Chosen Device: {}'.format(chosen_device))
    return chosen_device

chosen_device = choose_tf_device()
You can use the following tf devices: ['/device:CPU:0', '/device:XLA_CPU:0', '/device:XLA_GPU:0', '/device:GPU:0']
Chosen Device: /device:XLA_GPU:0

Definition of the Neural Architecture

Basic Sequence To Sequence (seq2seq) RNN

Here is a basic sequence to sequence neural architecture. "ABC" is a past input. "WXYZ" is here both a future output and a future input as a feedback loop. This feedback loop has been proven to improve the results of RNNs in some cases (read more).

In our case, we won't do such a feedback loop, as it requires more complex sampling during training and testing and would be too complicated for today's practical example.

Our Stacked GRU seq2seq RNN

Here is what we do: the "H" is the hidden output of the encoder RNN's last time step. We replicate this value across time in the future as a future data input to the RNN, to make it remember the context of the present at all times when predicting the future.

Notice that we could instead have plugged an attention mechanism in here. Doing so would allow the neural net to re-analyze the past at every step in the future if it needed to. Attention mechanisms are more useful in contexts like Machine Translation (MT), where it's sometimes important to look back "word per word" at what was written, rather than being limited by a short-term memory accumulated once after reading everything. More recent Machine Translation approaches like BERT (read on BERT / see example of using BERT) use only attention mechanisms, without RNNs (with some tradeoffs, however).

Creating the TensorFlow 2 Model

Let's proceed and code what we see in the image just above.

def create_model(step: Tensorflow2ModelStep) -> tf.keras.Model:
    """
   Create a TensorFlow v2 sequence to sequence (seq2seq) encoder-decoder model.

   :param step: The base Neuraxle step for TensorFlow v2 (Tensorflow2ModelStep)
   :return: TensorFlow v2 Keras model
    """
    # shape: (batch_size, seq_length, input_dim)
    encoder_inputs = Input(
        shape=(None, step.hyperparams['input_dim']),
        batch_size=None,
        dtype=tf.dtypes.float32,
        name='encoder_inputs'
    )

    last_encoder_outputs, last_encoders_states = _create_encoder(step, encoder_inputs)
    decoder_outputs = _create_decoder(step, last_encoder_outputs, last_encoders_states)

    return Model(encoder_inputs, decoder_outputs)

def _create_encoder(step: Tensorflow2ModelStep, encoder_inputs: Input) -> (tf.Tensor, List[tf.Tensor]):
    """
   Create an encoder RNN using GRU Cells. GRU cells are similar to LSTM cells.

   :param step: The base Neuraxle step for TensorFlow v2 (class Tensorflow2ModelStep)
    :param encoder_inputs: encoder inputs layer of shape (batch_size, seq_length, input_dim)
    :return: (last encoder outputs, last stacked encoders states)
                last_encoder_outputs shape: (batch_size, hidden_dim)
                last_encoder_states shape: (layers_stacked_count, batch_size, hidden_dim)
    """
    encoder = RNN(cell=_create_stacked_rnn_cells(step), return_sequences=False, return_state=True)

    last_encoder_outputs_and_states = encoder(encoder_inputs)
    # last_encoder_outputs shape: (batch_size, hidden_dim)
    # last_encoder_states shape: (layers_stacked_count, batch_size, hidden_dim)

    # refer to: https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN?version=stable#output_shape_2
    last_encoder_outputs, *last_encoders_states = last_encoder_outputs_and_states
    return last_encoder_outputs, last_encoders_states

def _create_decoder(step: Tensorflow2ModelStep, last_encoder_outputs: tf.Tensor, last_encoders_states: List[tf.Tensor]) -> tf.Tensor:
    """
   Create a decoder RNN using GRU cells.

   :param step: The base Neuraxle step for TensorFlow v2 (Tensorflow2ModelStep)
    :param last_encoders_states: last encoder states tensor
    :param last_encoder_outputs: last encoder output tensor
    :return: decoder output
    """
    decoder_lstm = RNN(cell=_create_stacked_rnn_cells(step), return_sequences=True, return_state=False)

    last_encoder_output = tf.expand_dims(last_encoder_outputs, axis=1)
    # last encoder output shape: (batch_size, 1, hidden_dim)

    replicated_last_encoder_output = tf.repeat(
        input=last_encoder_output,
        repeats=step.hyperparams['window_size_future'],
        axis=1
    )
    # replicated last encoder output shape: (batch_size, window_size_future, hidden_dim)

    decoder_outputs = decoder_lstm(replicated_last_encoder_output, initial_state=last_encoders_states)
    # decoder outputs shape: (batch_size, window_size_future, hidden_dim)

    decoder_dense = Dense(step.hyperparams['output_dim'])
    # decoder outputs shape: (batch_size, window_size_future, output_dim)

    return decoder_dense(decoder_outputs)

def _create_stacked_rnn_cells(step: Tensorflow2ModelStep) -> List[GRUCell]:
    """
   Create a `layers_stacked_count` amount of GRU cells and stack them on top of each other.
   They have a `hidden_dim` number of neuron layer size.

   :param step: The base Neuraxle step for TensorFlow v2 (Tensorflow2ModelStep)
    :return: list of gru cells
    """
    cells = []
    for _ in range(step.hyperparams['layers_stacked_count']):
        cells.append(GRUCell(step.hyperparams['hidden_dim']))

    return cells

Create Loss

Using the Mean Squared Error (MSE) and weight decay (L2 penalty) regularization.

def create_loss(step: Tensorflow2ModelStep, expected_outputs: tf.Tensor, predicted_outputs: tf.Tensor) -> tf.Tensor:
    """
    Create model loss.

   :param step: The base Neuraxle step for TensorFlow v2 (Tensorflow2ModelStep)
   :param expected_outputs: expected outputs of shape (batch_size, window_size_future, output_dim)
   :param predicted_outputs: expected outputs of shape (batch_size, window_size_future, output_dim)
   :return: loss (a tf Tensor that is a float)
    """
    l2 = step.hyperparams['lambda_loss_amount'] * sum(
        tf.reduce_mean(tf.nn.l2_loss(tf_var))
        for tf_var in step.model.trainable_variables
    )

    output_loss = sum(
        tf.reduce_mean(tf.nn.l2_loss(pred - expected))
        for pred, expected in zip(predicted_outputs, expected_outputs)
    ) / float(len(predicted_outputs))

    return output_loss + l2

Create Optimizer

Adam often wins.

def create_optimizer(step: Tensorflow2ModelStep) -> AdamOptimizer:
    """
    Create a TensorFlow 2 optimizer: here, the AdamOptimizer.

    :param step: The base Neuraxle step for TensorFlow v2 (Tensorflow2ModelStep)
    :return: optimizer
    """
    return AdamOptimizer(learning_rate=step.hyperparams['learning_rate'])

Generate or Load the Data

To change which exercise you are doing, change the value of the exercice_number variable (that is, the first line in the code cell below):

exercice_number = 1
print('exercice {}\n=================='.format(exercice_number))

data_inputs, expected_outputs = generate_data(
    # See: https://github.com/guillaume-chevalier/seq2seq-signal-prediction/blob/master/datasets.py
    exercice_number=exercice_number,
    n_samples=None,
    window_size_past=None,
    window_size_future=None
)

print('data_inputs shape: {} => (n_samples, window_size_past, input_dim)'.format(data_inputs.shape))
print('expected_outputs shape: {} => (n_samples, window_size_future, output_dim)'.format(expected_outputs.shape))

sequence_length = data_inputs.shape[1]
input_dim = data_inputs.shape[2]
output_dim = expected_outputs.shape[2]

batch_size = 100
epochs = 15
validation_size = 0.15
max_plotted_validation_predictions = 10
exercice 1
==================
data_inputs shape: (1000, 10, 2) => (n_samples, window_size_past, input_dim)
expected_outputs shape: (1000, 10, 2) => (n_samples, window_size_future, output_dim)

Neural Network's hyperparameters

seq2seq_pipeline_hyperparams = HyperparameterSamples({
    'hidden_dim': 12,
    'layers_stacked_count': 2,
    'lambda_loss_amount': 0.0003,
    'learning_rate': 0.001,
    'window_size_future': sequence_length,
    'output_dim': output_dim,
    'input_dim': input_dim
})

print('hyperparams: {}'.format(seq2seq_pipeline_hyperparams))
hyperparams: HyperparameterSamples([('hidden_dim', 12), ('layers_stacked_count', 2), ('lambda_loss_amount', 0.0003), ('learning_rate', 0.001), ('window_size_future', 10), ('output_dim', 2), ('input_dim', 2)])

The Pipeline

Seeing dirty machine learning code has almost become the industry norm, and it surely contributes to the reasons why an estimated 87% of data science projects never make it into production.

Here, we use advanced design patterns (pipe and filter) to do what we call clean machine learning. These design patterns are inspired by scikit-learn's Pipeline class.
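
For comparison, here is a minimal scikit-learn pipeline using the same pipe-and-filter idea (the steps here are illustrative, not this project's):

from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline as SklearnPipeline
from sklearn.preprocessing import StandardScaler

# Each step's output feeds the next step's input, exactly like the
# Neuraxle pipeline defined below.
sklearn_pipeline = SklearnPipeline([
    ('normalize', StandardScaler()),
    ('regress', LinearRegression()),
])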

Defining the Deep Learning Pipeline

Here, we first define the pipeline using a Tensorflow2ModelStep. The MeanStdNormalizer helps us normalize data, as a neural network needs to see normalized data.
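
MeanStdNormalizer itself lives in the downloaded steps.py; as a rough idea of what such a step does, here is a hypothetical standalone sketch of mean/std normalization (not the project's actual implementation):

import numpy as np

def mean_std_normalize(data_inputs):
    # Hypothetical sketch: center the data on its mean and scale by its
    # standard deviation so the network sees values roughly around zero.
    mean = np.mean(data_inputs)
    std = np.std(data_inputs) + 1e-8  # epsilon avoids division by zero
    return (data_inputs - mean) / std

The actual pipeline definition follows: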

feature_0_metric = metric_3d_to_2d_wrapper(mean_squared_error)
metrics = {'mse': feature_0_metric}

signal_prediction_pipeline = Pipeline([
    ForEachDataInput(MeanStdNormalizer()),
    ToNumpy(),
    PlotPredictionsWrapper(Tensorflow2ModelStep(
        # See: https://github.com/Neuraxio/Neuraxle-TensorFlow
        create_model=create_model,
        create_loss=create_loss,
        create_optimizer=create_optimizer,
        expected_outputs_dtype=tf.dtypes.float32,
        data_inputs_dtype=tf.dtypes.float32,
        print_loss=False,
        device_name=chosen_device
).set_hyperparams(seq2seq_pipeline_hyperparams))]).set_name('SignalPrediction')

Defining how to Train our Deep Learning Pipeline

Finally, let's wrap the pipeline with an EpochRepeater, ValidationSplitWrapper, DataShuffler, MiniBatchSequentialPipeline and MetricsWrapper to handle everything it needs to be trained. You can refer to Neuraxle's documentation for more info on those objects.


pipeline = Pipeline([EpochRepeater(
    ValidationSplitWrapper(
        MetricsWrapper(Pipeline([
            TrainOnlyWrapper(DataShuffler()),
            MiniBatchSequentialPipeline([
                MetricsWrapper(
                    signal_prediction_pipeline,
                    metrics=metrics,
                    name='batch_metrics'
                )], batch_size=batch_size)
            ]), 
            metrics=metrics,
            name='epoch_metrics',
            print_metrics=True
        ),
        test_size=validation_size,
        scoring_function=feature_0_metric), 
    epochs=epochs)
])
/usr/local/lib/python3.6/dist-packages/neuraxle/pipeline.py:353: UserWarning: Replacing MiniBatchSequentialPipeline[Joiner].batch_size by MiniBatchSequentialPipeline.batch_size.
  'Replacing {}[{}].batch_size by {}.batch_size.'.format(self.name, step.name, self.name))

Training of the neural net

Time to fit the model on the data.


pipeline, outputs = pipeline.fit_transform(data_inputs, expected_outputs)
Executing op RandomUniform in device /job:localhost/replica:0/task:0/device:XLA_GPU:0
(... further "Executing op ... in device ..." placement log lines trimmed ...)
{'mse': 0.18414642847122925}
{'mse': 0.1778781379709343}
{'mse': 0.1723181842129053}
{'mse': 0.1658200688421554}
{'mse': 0.1591329577983185}
{'mse': 0.15131258011101834}
{'mse': 0.1436201535512516}
{'mse': 0.1343595503512161}
{'mse': 0.12474072112690562}
{'mse': 0.11462532630747631}
{'mse': 0.10271182130173581}
{'mse': 0.0906442166022616}
{'mse': 0.07585859336447773}
{'mse': 0.06317439259405164}
{'mse': 0.04988300184267241}
{'mse': 0.041345448752856694}
{'mse': 0.034553488508200454}
{'mse': 0.03218617403485365}
{'mse': 0.02922688138678744}
{'mse': 0.02631547230588055}
{'mse': 0.022075968214915552}
{'mse': 0.018800000904722468}
{'mse': 0.01640079469351695}
{'mse': 0.014737265865397323}
{'mse': 0.013079363146911618}
{'mse': 0.01166897820815228}
{'mse': 0.010537850442431971}
{'mse': 0.00938083864872879}
{'mse': 0.008495135058422493}
{'mse': 0.007566329717239811}

Visualizing Test Predictions

See how your training performed.

plot_metrics(pipeline=pipeline, exercice_number=exercice_number)
last mse train: 0.008495135058422493
best mse train: 0.008495135058422493
last mse validation: 0.007566329717239811
best mse validation: 0.007566329717239811

(plot: training and validation MSE metrics)

def plot_predictions(data_inputs, expected_outputs, pipeline, max_plotted_predictions):
    _, _, data_inputs_validation, expected_outputs_validation = \
        pipeline.get_step_by_name('ValidationSplitWrapper').split(data_inputs, expected_outputs)

    pipeline.apply('toggle_plotting')
    pipeline.apply('set_max_plotted_predictions', max_plotted_predictions)

    signal_prediction_pipeline = pipeline.get_step_by_name('SignalPrediction')
    signal_prediction_pipeline.transform_data_container(DataContainer(
        data_inputs=data_inputs_validation,
        expected_outputs=expected_outputs_validation
    ))

plot_predictions(data_inputs, expected_outputs, pipeline, max_plotted_validation_predictions)
(device placement log lines trimmed)

(plots: validation prediction charts)

Conclusion

Recurrent Neural Networks are fabulous. They can learn to predict complex things. They can read multiple features from sequence data and output variable-length sequences of the same features, or of totally different features. Some people even combine RNNs with other neural network architectures, such as CNNs, for automatic image captioning (a CNN encoder for the image, an RNN decoder for the description).

Here is what you learned:

  • Building a time series machine learning pipeline
  • Building a TensorFlow v2 encoder-decoder sequence to sequence model
  • Building a clean machine learning pipeline using Neuraxle
  • Properly splitting the data for training and validation
  • Shuffling the data during training
  • Processing the data in minibatches using a MiniBatchSequentialPipeline

About Us

The Author, Guillaume Chevalier:

This original project was updated and maintained with the support of our team, contributors and business partners at Neuraxio.

License & Citation

This project is free to use according to the Apache 2.0 License, as long as you link to the project (citation) and respect the License (read the License for more details). You can cite it by pointing to the following link:

Collaborate with us on similar research projects

Join the slack workspace for time series processing, where you can:

  • Collaborate with us and other researchers on writing more time series processing papers, in the #research channel;
  • Do business with us and other companies for services and products related to time series processing, in the #business channel;
  • Talk about how to do Clean Machine Learning using Neuraxle, in the #neuraxle channel;

Online Course: Learn Deep Learning and Recurrent Neural Networks (DL&RNN)

We have created a course on Deep Learning and Recurrent Neural Networks (DL&RNN). Access the course preview here. It is the densest and most accelerated course out there on this precise topic, built to make you understand RNNs and other advanced neural network techniques quickly.

We've also created another course on how to do Clean Machine Learning with the right design patterns and the right software architecture for your code to evolve correctly and remain usable in production environments. Coming soon (not online yet).


Author: guillaume-chevalier
Source code: https://github.com/guillaume-chevalier/seq2seq-signal-prediction
License: Apache-2.0 license

#tensorflow 


Human Activity Recognition Example using TensorFlow with LSTM

LSTMs for Human Activity Recognition

Human Activity Recognition (HAR) using the smartphones dataset and an LSTM RNN, classifying the type of movement amongst six categories:

  • WALKING,
  • WALKING_UPSTAIRS,
  • WALKING_DOWNSTAIRS,
  • SITTING,
  • STANDING,
  • LAYING.

Compared to a classical approach, using a Recurrent Neural Network (RNN) with Long Short-Term Memory cells (LSTMs) requires little to no feature engineering. Data can be fed directly into the neural network, which acts like a black box, modeling the problem correctly. Other research on this activity recognition dataset uses a large amount of feature engineering, which is more of a signal processing approach combined with classical data science techniques. The approach here is very simple in terms of how much the data was preprocessed.

Let's use Google's neat Deep Learning library, TensorFlow, to demonstrate the usage of an LSTM, a type of artificial neural network that can process sequential data / time series.

Video dataset overview

Follow this link to see a video of the 6 activities recorded in the experiment with one of the participants:

Video of the experiment

[Watch video]

 

Details about the input data

I will be using an LSTM on the data (from a cellphone attached to the waist) to learn to recognise the type of activity the user is doing. The dataset's description goes like this:

The sensor signals (accelerometer and gyroscope) were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec and 50% overlap (128 readings/window). The sensor acceleration signal, which has gravitational and body motion components, was separated using a Butterworth low-pass filter into body acceleration and gravity. The gravitational force is assumed to have only low frequency components, therefore a filter with 0.3 Hz cutoff frequency was used.

That said, I will use the almost-raw data: only the gravity effect has been filtered out of the accelerometer, as a preprocessing step, to provide another 3D feature as an input to help learning. If you ever wanted to extract the gravity component yourself, you could fork my code for using a Butterworth Low-Pass Filter (LPF) in Python and edit it to use the right cutoff frequency of 0.3 Hz, which is a good frequency for activity recognition from body sensors.
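
As a hedged illustration of the windowing scheme described above (128 readings per window at 50 Hz, with 50% overlap), here is a minimal, hypothetical sketch of how such windows could be cut from a raw 1D signal (this is not the dataset's actual preprocessing code):

import numpy as np

def sliding_windows(signal, window_size=128, overlap=0.5):
    # Hypothetical sketch: fixed-width windows with 50% overlap,
    # i.e. a hop of 64 readings between consecutive windows.
    step = int(window_size * (1.0 - overlap))
    return np.array([
        signal[start:start + window_size]
        for start in range(0, len(signal) - window_size + 1, step)
    ])

windows = sliding_windows(np.arange(1000, dtype=np.float32))
print(windows.shape)  # (14, 128)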

What is an RNN?

As explained in this article, an RNN takes many input vectors, processes them, and outputs other vectors. It can be roughly pictured as in the image below, imagining each rectangle has a vectorial depth and other special hidden quirks. In our case, the "many to one" architecture is used: we accept time series of feature vectors (one vector per time step) and convert them to a probability vector at the output, for classification. Note that a "one to one" architecture would be a standard feedforward neural network.

(image: RNN architectures; learn more on RNNs)
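
To make the "many to one" idea concrete, here is a tiny hypothetical sketch in plain NumPy: the RNN produces one output vector per time step, and only the last one is projected to the six class scores (the shapes match this notebook's n_steps=128, n_hidden=32 and n_classes=6):

import numpy as np

# Stand-in for the RNN's outputs: (batch_size, n_steps, n_hidden).
rnn_outputs = np.random.randn(1500, 128, 32)

# "Many to one": keep only the last time step's output, (batch, n_hidden).
last_output = rnn_outputs[:, -1, :]

# Project the last output to class scores: (batch, n_classes).
W_out = np.random.randn(32, 6)
b_out = np.zeros(6)
logits = last_output @ W_out + b_out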

What is an LSTM?

An LSTM is an improved RNN. It is more complex, but easier to train, avoiding what is called the vanishing gradient problem. I recommend this course if you want to learn more about LSTMs.

Learn more on LSTMs

Results

Scroll on! Nice visuals await.

# All Includes

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf  # Version 1.0.0 (some previous versions are used in past commits)
from sklearn import metrics

import os
# Useful Constants

# Those are separate normalised input features for the neural network
INPUT_SIGNAL_TYPES = [
    "body_acc_x_",
    "body_acc_y_",
    "body_acc_z_",
    "body_gyro_x_",
    "body_gyro_y_",
    "body_gyro_z_",
    "total_acc_x_",
    "total_acc_y_",
    "total_acc_z_"
]

# Output classes to learn how to classify
LABELS = [
    "WALKING",
    "WALKING_UPSTAIRS",
    "WALKING_DOWNSTAIRS",
    "SITTING",
    "STANDING",
    "LAYING"
]

Let's start by downloading the data:

# Note: Linux bash commands start with a "!" inside those "ipython notebook" cells

DATA_PATH = "data/"

!pwd && ls
os.chdir(DATA_PATH)
!pwd && ls

!python download_dataset.py

!pwd && ls
os.chdir("..")
!pwd && ls

DATASET_PATH = DATA_PATH + "UCI HAR Dataset/"
print("\n" + "Dataset is now located at: " + DATASET_PATH)
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition
data     LSTM_files  LSTM_OLD.ipynb  README.md
LICENSE  LSTM.ipynb  lstm.py         screenlog.0
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data
download_dataset.py  source.txt

Downloading...
--2017-05-24 01:49:53--  https://archive.ics.uci.edu/ml/machine-learning-databases/00240/UCI%20HAR%20Dataset.zip
Resolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.249
Connecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.249|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 60999314 (58M) [application/zip]
Saving to: ‘UCI HAR Dataset.zip’

100%[======================================>] 60,999,314  1.69MB/s   in 38s    

2017-05-24 01:50:31 (1.55 MB/s) - ‘UCI HAR Dataset.zip’ saved [60999314/60999314]

Downloading done.

Extracting...
Extracting successfully done to /home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data/UCI HAR Dataset.
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data
download_dataset.py  __MACOSX  source.txt  UCI HAR Dataset  UCI HAR Dataset.zip
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition
data     LSTM_files  LSTM_OLD.ipynb  README.md
LICENSE  LSTM.ipynb  lstm.py         screenlog.0

Dataset is now located at: data/UCI HAR Dataset/

Preparing dataset:

TRAIN = "train/"
TEST = "test/"


# Load "X" (the neural network's training and testing inputs)

def load_X(X_signals_paths):
    X_signals = []

    for signal_type_path in X_signals_paths:
        file = open(signal_type_path, 'r')
        # Read dataset from disk, dealing with text files' syntax
        X_signals.append(
            [np.array(serie, dtype=np.float32) for serie in [
                row.replace('  ', ' ').strip().split(' ') for row in file
            ]]
        )
        file.close()

    return np.transpose(np.array(X_signals), (1, 2, 0))

X_train_signals_paths = [
    DATASET_PATH + TRAIN + "Inertial Signals/" + signal + "train.txt" for signal in INPUT_SIGNAL_TYPES
]
X_test_signals_paths = [
    DATASET_PATH + TEST + "Inertial Signals/" + signal + "test.txt" for signal in INPUT_SIGNAL_TYPES
]

X_train = load_X(X_train_signals_paths)
X_test = load_X(X_test_signals_paths)

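A quick sanity check can be done at this point (an added suggestion; the expected shapes below are the ones this notebook states further down):

print(X_train.shape)  # expected: (7352, 128, 9) -> series, time steps, signals
print(X_test.shape)   # expected: (2947, 128, 9)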

# Load "y" (the neural network's training and testing outputs)

def load_y(y_path):
    # Read dataset from disk, dealing with the text file's syntax
    with open(y_path, 'r') as file:
        y_ = np.array(
            [elem for elem in [
                row.replace('  ', ' ').strip().split(' ') for row in file
            ]],
            dtype=np.int32
        )

    # Subtract 1 from each output class for friendly 0-based indexing
    return y_ - 1

y_train_path = DATASET_PATH + TRAIN + "y_train.txt"
y_test_path = DATASET_PATH + TEST + "y_test.txt"

y_train = load_y(y_train_path)
y_test = load_y(y_test_path)

Additional Parameters:

Here are some core parameter definitions for the training.

For example, the whole neural network's structure could be summarised by enumerating those parameters, plus the fact that two LSTM cells are stacked on top of one another (output-to-input) as hidden layers through the time steps.

# Input Data

training_data_count = len(X_train)  # 7352 training series (with 50% overlap between each series)
test_data_count = len(X_test)  # 2947 testing series
n_steps = len(X_train[0])  # 128 timesteps per series
n_input = len(X_train[0][0])  # 9 input parameters per timestep


# LSTM Neural Network's internal structure

n_hidden = 32 # Hidden layer num of features
n_classes = 6 # Total classes (the 6 activity labels listed above)


# Training

learning_rate = 0.0025
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300  # Loop 300 times on the dataset
batch_size = 1500
display_iter = 30000  # To show test set accuracy during training


# Some debugging info

print("Some useful info to get an insight on dataset's shape and normalisation:")
print("(X shape, y shape, every X's mean, every X's standard deviation)")
print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print("The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.")
Some useful info to get an insight on dataset's shape and normalisation:
(X shape, y shape, every X's mean, every X's standard deviation)
(2947, 128, 9) (2947, 1) 0.0991399 0.395671
The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.

Utility functions for training:

def LSTM_RNN(_X, _weights, _biases):
    # Function returns a TensorFlow LSTM (RNN) artificial neural network from given parameters.
    # Moreover, two LSTM cells are stacked, which adds depth to the neural network.
    # Note: some code of this notebook is inspired by a slightly different
    # RNN architecture used on another dataset; some of the credit goes to
    # "aymericdamien" under the MIT license.

    # (NOTE: This step could be greatly optimised by shaping the dataset once.)
    # input shape: (batch_size, n_steps, n_input)
    _X = tf.transpose(_X, [1, 0, 2])  # permute n_steps and batch_size
    # Reshape to prepare input to hidden activation
    _X = tf.reshape(_X, [-1, n_input])
    # new shape: (n_steps*batch_size, n_input)

    # ReLU activation, thanks to Yu Zhao for adding this improvement here:
    _X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])
    # Split data because rnn cell needs a list of inputs for the RNN inner loop
    _X = tf.split(_X, n_steps, 0)
    # new shape: n_steps * (batch_size, n_hidden)

    # Define two stacked LSTM cells (two recurrent layers deep) with tensorflow
    lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
    # Get LSTM cell output
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)

    # Get last time step's output feature for a "many-to-one" style classifier,
    # as in the image describing RNNs at the top of this page
    lstm_last_output = outputs[-1]

    # Linear activation
    return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']


def extract_batch_size(_train, step, batch_size):
    # Function to fetch a "batch_size" amount of data from "(X|y)_train" data.

    shape = list(_train.shape)
    shape[0] = batch_size
    batch_s = np.empty(shape)

    for i in range(batch_size):
        # Loop index
        index = ((step-1)*batch_size + i) % len(_train)
        batch_s[i] = _train[index]

    return batch_s

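A small usage sketch (not in the original notebook) makes the behaviour visible: the modulo in the index wraps batches around once the end of the dataset is reached.

toy = np.arange(10).reshape(10, 1)            # pretend dataset of 10 examples
print(extract_batch_size(toy, 1, 4).ravel())  # [0. 1. 2. 3.]
print(extract_batch_size(toy, 3, 4).ravel())  # [8. 9. 0. 1.] - wraps around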

def one_hot(y_, n_classes=n_classes):
    # Function to encode neural one-hot output labels from number indexes
    # e.g.:
    # one_hot(y_=[[5], [0], [3]], n_classes=6):
    #     return [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]

    y_ = y_.reshape(len(y_))
    return np.eye(n_classes)[np.array(y_, dtype=np.int32)]  # Returns FLOATS

Let's get serious and build the neural network:


# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = LSTM_RNN(x, weights, biases)

# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # L2 loss prevents this overkill neural network from overfitting the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer

correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Hooray, now train the neural network:

# To keep track of training's performance
test_losses = []
test_accuracies = []
train_losses = []
train_accuracies = []

# Launch the graph
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
init = tf.global_variables_initializer()
sess.run(init)

# Perform Training steps with "batch_size" amount of example data at each loop
step = 1
while step * batch_size <= training_iters:
    batch_xs =         extract_batch_size(X_train, step, batch_size)
    batch_ys = one_hot(extract_batch_size(y_train, step, batch_size))

    # Fit training using batch data
    _, loss, acc = sess.run(
        [optimizer, cost, accuracy],
        feed_dict={
            x: batch_xs,
            y: batch_ys
        }
    )
    train_losses.append(loss)
    train_accuracies.append(acc)

    # Evaluate network only at some steps for faster training:
    if (step*batch_size % display_iter == 0) or (step == 1) or (step * batch_size > training_iters):

        # To not spam console, show training accuracy/loss in this "if"
        print("Training iter #" + str(step*batch_size) + \
              ":   Batch Loss = " + "{:.6f}".format(loss) + \
              ", Accuracy = {}".format(acc))

        # Evaluation on the test set (no learning made here - just evaluation for diagnosis)
        loss, acc = sess.run(
            [cost, accuracy],
            feed_dict={
                x: X_test,
                y: one_hot(y_test)
            }
        )
        test_losses.append(loss)
        test_accuracies.append(acc)
        print("PERFORMANCE ON TEST SET: " + \
              "Batch Loss = {}".format(loss) + \
              ", Accuracy = {}".format(acc))

    step += 1

print("Optimization Finished!")

# Accuracy for test data

one_hot_predictions, accuracy, final_loss = sess.run(
    [pred, accuracy, cost],
    feed_dict={
        x: X_test,
        y: one_hot(y_test)
    }
)

test_losses.append(final_loss)
test_accuracies.append(accuracy)

print("FINAL RESULT: " + \
      "Batch Loss = {}".format(final_loss) + \
      ", Accuracy = {}".format(accuracy))
WARNING:tensorflow:From <ipython-input-19-3339689e51f6>:9: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
Training iter #1500:   Batch Loss = 5.416760, Accuracy = 0.15266665816307068
PERFORMANCE ON TEST SET: Batch Loss = 4.880829811096191, Accuracy = 0.05632847175002098
Training iter #30000:   Batch Loss = 3.031930, Accuracy = 0.607333242893219
PERFORMANCE ON TEST SET: Batch Loss = 3.0515167713165283, Accuracy = 0.6067186594009399
Training iter #60000:   Batch Loss = 2.672764, Accuracy = 0.7386666536331177
PERFORMANCE ON TEST SET: Batch Loss = 2.780435085296631, Accuracy = 0.7027485370635986
Training iter #90000:   Batch Loss = 2.378301, Accuracy = 0.8366667032241821
PERFORMANCE ON TEST SET: Batch Loss = 2.6019773483276367, Accuracy = 0.7617915868759155
Training iter #120000:   Batch Loss = 2.127290, Accuracy = 0.9066667556762695
PERFORMANCE ON TEST SET: Batch Loss = 2.3625404834747314, Accuracy = 0.8116728663444519
Training iter #150000:   Batch Loss = 1.929805, Accuracy = 0.9380000233650208
PERFORMANCE ON TEST SET: Batch Loss = 2.306251049041748, Accuracy = 0.8276212215423584
Training iter #180000:   Batch Loss = 1.971904, Accuracy = 0.9153333902359009
PERFORMANCE ON TEST SET: Batch Loss = 2.0835530757904053, Accuracy = 0.8771631121635437
Training iter #210000:   Batch Loss = 1.860249, Accuracy = 0.8613333702087402
PERFORMANCE ON TEST SET: Batch Loss = 1.9994492530822754, Accuracy = 0.8788597583770752
Training iter #240000:   Batch Loss = 1.626292, Accuracy = 0.9380000233650208
PERFORMANCE ON TEST SET: Batch Loss = 1.879166603088379, Accuracy = 0.8944689035415649
Training iter #270000:   Batch Loss = 1.582758, Accuracy = 0.9386667013168335
PERFORMANCE ON TEST SET: Batch Loss = 2.0341007709503174, Accuracy = 0.8361043930053711
Training iter #300000:   Batch Loss = 1.620352, Accuracy = 0.9306666851043701
PERFORMANCE ON TEST SET: Batch Loss = 1.8185184001922607, Accuracy = 0.8639293313026428
Training iter #330000:   Batch Loss = 1.474394, Accuracy = 0.9693333506584167
PERFORMANCE ON TEST SET: Batch Loss = 1.7638503313064575, Accuracy = 0.8747878670692444
Training iter #360000:   Batch Loss = 1.406998, Accuracy = 0.9420000314712524
PERFORMANCE ON TEST SET: Batch Loss = 1.5946787595748901, Accuracy = 0.902273416519165
Training iter #390000:   Batch Loss = 1.362515, Accuracy = 0.940000057220459
PERFORMANCE ON TEST SET: Batch Loss = 1.5285792350769043, Accuracy = 0.9046487212181091
Training iter #420000:   Batch Loss = 1.252860, Accuracy = 0.9566667079925537
PERFORMANCE ON TEST SET: Batch Loss = 1.4635565280914307, Accuracy = 0.9107565879821777
Training iter #450000:   Batch Loss = 1.190078, Accuracy = 0.9553333520889282
...
PERFORMANCE ON TEST SET: Batch Loss = 0.42567864060401917, Accuracy = 0.9324736595153809
Training iter #2070000:   Batch Loss = 0.342763, Accuracy = 0.9326667189598083
PERFORMANCE ON TEST SET: Batch Loss = 0.4292983412742615, Accuracy = 0.9273836612701416
Training iter #2100000:   Batch Loss = 0.259442, Accuracy = 0.9873334169387817
PERFORMANCE ON TEST SET: Batch Loss = 0.44131210446357727, Accuracy = 0.9273836612701416
Training iter #2130000:   Batch Loss = 0.284630, Accuracy = 0.9593333601951599
PERFORMANCE ON TEST SET: Batch Loss = 0.46982717514038086, Accuracy = 0.9093992710113525
Training iter #2160000:   Batch Loss = 0.299012, Accuracy = 0.9686667323112488
PERFORMANCE ON TEST SET: Batch Loss = 0.48389002680778503, Accuracy = 0.9138105511665344
Training iter #2190000:   Batch Loss = 0.287106, Accuracy = 0.9700000286102295
PERFORMANCE ON TEST SET: Batch Loss = 0.4670214056968689, Accuracy = 0.9216151237487793
Optimization Finished!
FINAL RESULT: Batch Loss = 0.45611169934272766, Accuracy = 0.9165252447128296

Training is good, but having visual insight is even better:

Okay, let's plot this simply in the notebook for now.

# (Inline plots: )
%matplotlib inline

font = {
    'family' : 'Bitstream Vera Sans',
    'weight' : 'bold',
    'size'   : 18
}
matplotlib.rc('font', **font)

width = 12
height = 12
plt.figure(figsize=(width, height))

indep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))
plt.plot(indep_train_axis, np.array(train_losses),     "b--", label="Train losses")
plt.plot(indep_train_axis, np.array(train_accuracies), "g--", label="Train accuracies")

indep_test_axis = np.append(
    np.array(range(batch_size, len(test_losses)*display_iter, display_iter)[:-1]),
    [training_iters]
)
plt.plot(indep_test_axis, np.array(test_losses),     "b-", label="Test losses")
plt.plot(indep_test_axis, np.array(test_accuracies), "g-", label="Test accuracies")

plt.title("Training session's progress over iterations")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Training Progress (Loss or Accuracy values)')
plt.xlabel('Training iteration')

plt.show()

LSTM Training Testing Comparison Curve

And finally, the multi-class confusion matrix and metrics!

# Results

predictions = one_hot_predictions.argmax(1)

print("Testing Accuracy: {}%".format(100*accuracy))

print("")
print("Precision: {}%".format(100*metrics.precision_score(y_test, predictions, average="weighted")))
print("Recall: {}%".format(100*metrics.recall_score(y_test, predictions, average="weighted")))
print("f1_score: {}%".format(100*metrics.f1_score(y_test, predictions, average="weighted")))

print("")
print("Confusion Matrix:")
confusion_matrix = metrics.confusion_matrix(y_test, predictions)
print(confusion_matrix)
normalised_confusion_matrix = np.array(confusion_matrix, dtype=np.float32)/np.sum(confusion_matrix)*100

print("")
print("Confusion matrix (normalised to % of total test data):")
print(normalised_confusion_matrix)
print("Note: training and testing data is not equally distributed amongst classes, ")
print("so it is normal that more than a 6th of the data is correctly classifier in the last category.")

# Plot Results:
width = 12
height = 12
plt.figure(figsize=(width, height))
plt.imshow(
    normalised_confusion_matrix,
    interpolation='nearest',
    cmap=plt.cm.rainbow
)
plt.title("Confusion matrix \n(normalised to % of total test data)")
plt.colorbar()
tick_marks = np.arange(n_classes)
plt.xticks(tick_marks, LABELS, rotation=90)
plt.yticks(tick_marks, LABELS)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Testing Accuracy: 91.65252447128296%

Precision: 91.76286479743305%
Recall: 91.65252799457076%
f1_score: 91.6437546304815%

Confusion Matrix:
[[466   2  26   0   2   0]
 [  5 441  25   0   0   0]
 [  1   0 419   0   0   0]
 [  1   1   0 396  87   6]
 [  2   1   0  87 442   0]
 [  0   0   0   0   0 537]]

Confusion matrix (normalised to % of total test data):
[[ 15.81269073   0.06786563   0.88225317   0.           0.06786563   0.        ]
 [  0.16966406  14.96437073   0.84832031   0.           0.           0.        ]
 [  0.03393281   0.          14.21784878   0.           0.           0.        ]
 [  0.03393281   0.03393281   0.          13.43739319   2.95215464
    0.20359688]
 [  0.06786563   0.03393281   0.           2.95215464  14.99830341   0.        ]
 [  0.           0.           0.           0.           0.          18.22192001]]
Note: training and testing data are not equally distributed amongst classes,
so it is normal that more than a 6th of the data is correctly classified in the last category.

Confusion Matrix

sess.close()

Conclusion

Outstandingly, the final accuracy is 91%! And it can peak at values such as 93.25% in some lucky moments during training, depending on how the neural network's weights were randomly initialized at the start of training.

This means that the neural network is almost always able to correctly identify the movement type! Remember, the phone is attached at the waist, and each series to classify is just a 128-sample window of two internal sensors (i.e., 2.56 seconds at 50 Hz), so it amazes me how accurate those predictions are given this small window of context and raw data. I've validated and re-validated that there is no important bug, and the community has used and tested this code a lot. (Note: be sure to report something in the issues tab if you find bugs; otherwise Quora, StackOverflow, and other StackExchange sites are the places for asking questions.)

I especially did not expect such good results for distinguishing between the labels "SITTING" and "STANDING". Those are seemingly almost the same thing from the point of view of a device placed at waist level, according to how the dataset was originally gathered. Though, it is still possible to see a little cluster in the matrix between those two classes, which drifts away just a bit from the identity diagonal. This is great.

It is also possible to see that there was a slight difficulty in distinguishing between "WALKING", "WALKING_UPSTAIRS" and "WALKING_DOWNSTAIRS". Obviously, those activities are quite similar in terms of movements.

I also tried my code without the gyroscope, using only the accelerometers' 6 features (and not changing the training hyperparameters), and got an accuracy of 87%. In general, gyroscopes consume more power than accelerometers, so it can be preferable to turn them off.

Improvements

In another open-source repository of mine, the accuracy is pushed up to nearly 94% using a special deep LSTM architecture which combines the concepts of bidirectional RNNs, residual connections, and stacked cells. This architecture is also tested on another similar activity dataset. It resembles the nice architecture used in "Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", without an attention mechanism and with just the encoder part, as a "many to one" architecture instead of a "many to many", to be adapted to the Human Activity Recognition (HAR) problem. I also worked more on the problem and came up with the LARNN; however, it's complicated for just a little gain. Thus, the current, original activity recognition project is simply better to use for its simplicity. We've also coded a non-deep-learning machine learning pipeline on the same datasets using classical featurization techniques and older machine learning algorithms.
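
As a rough idea of the bidirectional part only (a minimal sketch assuming the same TF 1.x contrib API already used above; the real improved architecture lives in the repository linked in the paragraph above), a forward/backward pair of cells reads the series in both directions, and their outputs are concatenated:

# Sketch only: reuses tf, n_steps, n_input and n_hidden defined earlier.
x_bidir = tf.placeholder(tf.float32, [None, n_steps, n_input])
inputs = tf.unstack(tf.transpose(x_bidir, [1, 0, 2]))  # n_steps tensors of shape (batch, n_input)

fw_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
bw_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, _, _ = tf.contrib.rnn.static_bidirectional_rnn(
    fw_cell, bw_cell, inputs, dtype=tf.float32)
# outputs[-1] has shape (batch, 2*n_hidden): forward and backward halves concatenated.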

If you want to learn more about deep learning, I have also built a list of the learning resources for deep learning which have proven to be the most useful to me, available here. You may also be interested in my online course on Deep Learning and Recurrent Neural Networks (DL&RNN).

I have also made even more improvements, as seen just below with the few lines of code, for easier usage and for reaching an even better score. Note that this is still an ongoing project; subscribe here to learn more.

More time series processing

Visit Neuraxio's Time Series Solution product page for more information.

References

The dataset can be found on the UCI Machine Learning Repository:

Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. A Public Domain Dataset for Human Activity Recognition Using Smartphones. 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013. Bruges, Belgium, 24-26 April 2013.

Citation

Copyright (c) 2016 Guillaume Chevalier. To cite my code, you can point to the URL of the GitHub repository, for example:

Guillaume Chevalier, LSTMs for Human Activity Recognition, 2016, https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition

My code is available for free, even for private usage, for anyone under the MIT License; however, I ask that you cite it if you use the code.

Here is the BibTeX citation code:

@misc{chevalier2016lstms,
  title={LSTMs for human activity recognition},
  author={Chevalier, Guillaume},
  year={2016}
}

I've also published a second paper, with contributors, regarding a second iteration as an improvement of this work, with deeper neural networks. The paper is available on arXiv. Here is the BibTeX citation code for this newer piece of work based on this project:

@article{DBLP:journals/corr/abs-1708-08989,
  author    = {Yu Zhao and
               Rennong Yang and
               Guillaume Chevalier and
               Maoguo Gong},
  title     = {Deep Residual Bidir-LSTM for Human Activity Recognition Using Wearable
               Sensors},
  journal   = {CoRR},
  volume    = {abs/1708.08989},
  year      = {2017},
  url       = {http://arxiv.org/abs/1708.08989},
  archivePrefix = {arXiv},
  eprint    = {1708.08989},
  timestamp = {Mon, 13 Aug 2018 16:46:48 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1708-08989},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Extra links

Connect with me

Liked this project? Did it help you? Leave a star, fork and share the love!

This activity recognition project has been seen in:

Collaborate with us on similar projects!

Join our slack workspace for time series processing, where you can:

  • Collaborate with like-minded researchers in the #research channel;
  • Do business with us and other companies for services and products related to time series processing, in the #business channel;
  • Talk about how to do Clean Machine Learning using Neuraxle, in the #neuraxle channel;

Online Course: Learn Deep Learning and Recurrent Neural Networks (DL&RNN)

I have created a course on Deep Learning and Recurrent Neural Networks (DL&RNN). Watch a preview of the Deep Learning and Recurrent Neural Networks (DL&RNN) course here. It is the most richly dense and accelerated course out there on this precise topic, made to help you understand RNNs and other advanced neural network techniques quickly.


# Let's convert this notebook to a README automatically for the GitHub project's title page:
!jupyter nbconvert --to markdown LSTM.ipynb
!mv LSTM.md README.md
[NbConvertApp] Converting notebook LSTM.ipynb to markdown
[NbConvertApp] Support files will be in LSTM_files/
[NbConvertApp] Making directory LSTM_files
[NbConvertApp] Making directory LSTM_files
[NbConvertApp] Writing 38654 bytes to LSTM.md

Author: guillaume-chevalier
Source code:  https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition
License: MIT license

#tensorflow #jupyter 

Human Activity Recognition Example using TensorFlow with LSTM
Jonas  Wald

Jonas Wald

1658226145

How to choose the best Poloniex clone script?

In recent times, the Poloniex clone script has acquired an irreplaceable position in most entrepreneurs’ minds. With a strong will, many entrepreneurs have started to set their footprints in this #crypto market by launching their own crypto exchange business after considering its various benefits and its futuristic scope. Even though many get started with this leading business model, only a few become renowned. The rest are mostly misled on their path to success, whether through inadequate knowledge, faulty guidance, or other factors. We shall not be among them.

Initiating your crypto exchange #business with the help of the Poloniex clone script puts you one step ahead of your competitors. The benefits it offers to the start-ups that choose this method are immeasurable. But to gain these benefits, you must be well aware of the product you’re about to obtain. There are plenty of Poloniex clone scripts readily available in the current crypto market, but is it fair to expect superior quality from every other script? Probably not. Instead, get a brief overview of the features of a Poloniex clone script, which might help you later.

Impeccable features of the Poloniex clone script:

  • High-Performance Matching Engine
  • Spot Trading
  • Margin Trading
  • Futures Trading
  • P2P Trading
  • OTC Trading
  • User Dashboard
  • Admin Dashboard
  • Extended Trade View
  • KYC/AML
  • Referral program
  • Crypto/Fiat Payment Gateway integration


Security Features:

  • Jail Login
  • Two-Factor Authentication
  • Cloudflare Integration
  • SQL Injection Prevention
  • End-To-End Encryption Based SSL
  • Anti Denial Of Service(Dos)
  • Cross-Site Request Forgery Protection
  • Server-Side Request Forgery Protection
  • Anti Distributed Denial Of Service

These are some of the basic features a Poloniex clone script must possess. Additional features can be incorporated into your script based on your business needs. As previously said, to stand out from the herd, the one thing you must do is acquire the best Poloniex clone script in the entire crypto space; this will be the factor that drives your growth from the very beginning. To enjoy such an enriched service, the most important consideration is availing the service from a reputed provider, as they will deliver it with zero errors and enhance your overall business model’s growth from the ground up.

Filtering out a reliable crypto exchange software provider is a tedious task that requires a great deal of analysis and research, which consumes most of your time in this phase alone. To make it easier for you, I’ve done the entire research, and the end result was highly surprising.

I ended up with #coinsclone as the best-in-class crypto exchange clone script/software provider, with significant years of experience in sculpting various crypto exchange projects. Why them? That might be your question. Some of their past projects and their portfolio made me choose them over others. Instead of hearing it from me, have a glance at it now >>>> COINSCLONE

Get in touch with their team of experts and book your free demo.

Whatsapp: +91 9500575285

Skype: live:hello_20214

Mail Id: hello@coinsclone.com  

For an instant live demo >>>> Poloniex clone script
 

Alan  grace

Alan grace

1657868521

Build Your Own E-learning Platform Smoothly With the Udemy Clone Script.

Starting your own E-learning platform is now easy. The Udemy clone script from Trioangle includes the metrics to make all your E-learning services speedy and efficient. The features included in our Udemy clone app ensure the smoothness of your services.

Launch E-learning Platform smoothly with an all-in-one Udemy clone platform from us.

WhatsApp: +91 6379630152

E-mail:  sales@trioangle.com

Website:  https://www.trioangle.com/udemy-clone/

 #UdemyCloneScript  #UdemyClone #UdemycloneApp #ApplikeUdemy #business #businessideas 


 

Build Your Own E-learning Platform Smoothly With the Udemy Clone Script.
Jonas  Wald

Jonas Wald

1657794074

Bithumb clone script ~ A complete solution for entrepreneurs?

In this fast-paced world, it is a bitter truth that every individual has to outsmart others to make sure they survive. This applies to every industry. Speaking of the #business platform, most #entrepreneurs look for a high-potential business model to attain a highly recognizable position among others. The business model should also be in line with the current trend. That’s how the #crypto exchange business captured most of their attention among various business models. Wondering how it jumped to the top position? Here’s where your answer is hidden >>>> Crypto exchange business

After figuring it out, the majority of entrepreneurs looked forward to initiating their own crypto exchange similar to Bithumb. 

As there are various ways to start a crypto exchange like Bithumb, their main choice of development is the Bithumb clone script. As previously said, this is a fast-paced world, and enthusiastic entrepreneurs look for a simpler way to establish such a crypto exchange. That’s why they opted for the Bithumb clone script.

The Bithumb clone script is the simplest way of initiating a crypto exchange similar to Bithumb, as this crypto exchange software is equipped with all the fascinating features of the Bithumb exchange. This is why the entire development process of establishing this crypto exchange is made easy.

Peculiar features of Bithumb clone script:

  • 100% Customizable
  • Multiple devices support
  • Highly secure and bug-free
  • Advanced Trading Engine
  • Multiple Payment methods
  • Multi Crypto Wallet
  • Advanced UI/UX Design
  • Multi-lingual Support
  • Order Book
  • Mobile Trading App For Android & iOS
  • Trade History
  • KYC submit

Security features of Bithumb clone script:

  • Two Factor Authentication
  • Verification based communication through SMS/Email
  • End-to-End Encryption based SSL
  • Cross-Site forgery protection (CSRF)
  • Distributed Denial of Service (DDoS)
  • Server Side Forgery protection (SSRF)

Having clarified the ways one by one, there is a final crucial step to take care of. It’s none other than the selection of the right crypto exchange software provider in the current crypto space. As you know, a particular service can only be claimed in an enriched manner with the help of a genuine provider. That’s why it should be considered the most important step in the crypto exchange business. Failing to choose such a genuine provider can make the entire business collapse in a matter of days due to poor quality of service.

Why be part of a disaster when a better solution is at hand? Yes! I do have the solution you’re searching for. After completing research for my own use case, I ended up with Coinsclone. Being a supreme crypto exchange software provider, they are positioned at the top among several rivals in the market. Know what made me end up with Coinsclone.

After getting to know them, it wouldn’t be fair to continue with the same set of words. Instead, get in touch with their team of experts, who will guide you through the entire development phase and clear all of your queries in a snap.

Whatsapp: +91 9500575285

Skype: live:hello_20214

Mail Id: hello@coinsclone.com

Have a great experience with their work and proceed with your business process effectively. 

For instant live demo >>>>> Bithumb clone script
 

Akshara Singh

Akshara Singh

1657195593

Best ever way of initiating a crypto exchange business

Considering cryptocurrencies’ rapid growth and adoption, there is a massive demand for #crypto exchanges among crypto investors. Following this, many #entrepreneurs have started to enter this market by launching their own #crypto exchanges. As you know, the majority of them fail to succeed with this business model, as there are several factors to be looked at keenly. The foremost factor is the budget allotted for the business. Some tend to believe that high bid prices come with superior quality, which will make the #business successful. But it is a myth. Everything has a predefined value, and it serves as it is meant to.

Speaking of the budget for crypto exchange development, there is a certain range that has been followed in the current crypto market. Always know that the budget completely relies on the development method you’re about to choose for your business.

Generally, a crypto exchange business can be initiated in the following ways,

  • Starting it from scratch
  • Deploying it with the help of a cryptocurrency exchange script

Instead of taking much of your valuable time, I shall dive into the important aspects of these development strategies,

Starting a crypto exchange from scratch is going to drain all of your energy, as it is quite complicated. Mainly, you would require a lot of resources in terms of money, knowledge, time, etc. Speaking frankly, this method of developing a crypto exchange is not suitable for the majority of entrepreneurs. It would take a solid duration of 10 months to deploy your crypto exchange. In addition, this development method would cost around $80K~$100K. Whoa, this would be a nightmare for many.

On the other side, making use of a crypto exchange script is like taking the escalator instead of the stairs to reach the 10th floor of a building. The overall cost of launching a crypto exchange is reduced to an affordable price range of $5K~$15K. Also, you will be able to deploy your fully functioning crypto exchange within a week. This development method doesn’t require much technical knowledge.

I’m sure you now have clarity on the budget and the most effective way to initiate your crypto exchange business. Even after seeing this, some will be hesitant to make a strong decision; that’s usual. Why not have a look at the live experience of an individual who made use of the cryptocurrency exchange script? Also, I hereby advise you to follow the set of instructions provided by an expert on how to start a crypto exchange, which would probably drive you to success shortly.
 

Akshara Singh

Akshara Singh

1656581871

How to initiate a crypto exchange like Luno in a cost-effective way?

It’s not a new topic to introduce cryptocurrencies’ growth and their future scope, as this has been the hot topic in town lately. Considering cryptocurrencies’ massive growth, a huge demand for crypto exchanges arose among crypto investors and traders. Many entrepreneurs followed, making their impressions in this crypto sector by launching their own #crypto exchanges. But the majority of business people starting an exchange #business don’t succeed with it, as there are many factors influencing it. The foremost factor is the budget for your business. Some tend to fall for highly bid prices, believing the quality will be superfine. It’s a myth. Everything has a predefined value, and it serves as it is meant to.

While speaking of the budget for #cryptoexchange development like Luno, there is a certain range that has been followed in the current crypto market. Before deciding on a budget for the crypto exchange development, the method you’re about to choose is going to play a vital role. Let’s see them in detail,

Generally, a crypto exchange business can be initiated in the following ways,

  • Building it from scratch
  • Making use of the Luno clone script

I shall not take much of your time explaining it in depth; instead, I shall point out the vital attributes of these two methods.

Starting a crypto exchange like Luno from scratch is going to drain all of your energy, as this method is a bit complicated. You would require lump-sum resources in terms of money, knowledge, time, etc. To be frank, it doesn’t fit the majority of budding entrepreneurs’ strategies. It would take a solid duration of 10 months to deploy your crypto exchange. In addition, this development method would cost around $80K~$100K. That might be a huge amount for the majority of them.

On the other side, the Luno clone script lets you overcome all the hurdles you would face with the other development methodologies. The overall cost of launching a crypto exchange is reduced to an affordable price range of $5K~$12K. Also, you will be able to deploy your fully functioning crypto exchange within a week. This development method doesn’t require much technical knowledge.

Here’s an extra tip. Before fixing the budget, make sure to have an extensive idea of how to cut unwanted expenses from this crypto exchange development cost, and proceed with crafting your dream business in an affordable way.

I’m sure you now have clarity on the budget and the most effective way to initiate your crypto exchange business. After acquiring sufficient knowledge, you can go with the most effective development method ~ the Luno clone script.

Coming to the point, there is huge competition among the various crypto exchange software providers offering this Luno clone script. Sorting out the genuine ones from the massive lot is quite complicated; it might take weeks just to find a suitable one. Surprisingly, I’ve unloaded all of your burdens by doing that research for my own purposes. The result was absolutely shocking. #coinsclone seems to be a perfect fit among others. They have been serving this industry with their expertise in blockchain tech and have delivered hundreds of projects that have turned many business people’s lives around. Have a look at their individuality >>>> COINSCLONE
 

Akshara Singh

Akshara Singh

1655895208

In what way does the Localbitcoins clone script generate revenue?

One of the most popular crypto exchanges is Localbitcoins, which is a P2P-based crypto exchange. Being an entrepreneur, one can make their dream crypto exchange business practical with the help of this Localbitcoins clone script. You might be aware of the crypto exchange’s various benefits, and also of the benefits of building it with the help of the #localbitcoin clone script. In spite of the various unmatched benefits on offer, some #entrepreneurs remain unclear about the revenue patterns. Let’s have a closer look at them,


It is clear that the Localbitcoins clone script enables you to launch a well-defined P2P crypto exchange with ease. Once you have established a P2P #crypto exchange, you’ll be all set to begin your business. After making it through the initial phase and your marketing phase, more users will start to flow to your #p2p crypto exchange. That’s where your journey begins. Here are some of the revenue-generating methods,

Revenue generation streams

Fiat deposit/Crypto withdrawal fee

Well, after your crypto exchange attains a reasonable number of users, they’ll start to engage in trading, where they’ll be required to deposit some amount of fiat #money into their accounts. A minimal percentage can be charged to them by the admin of the exchange (you). Once they are done with their transactions, some may wish to withdraw the cryptocurrencies they bought; a certain percentage can also be claimed for this action.

Advertisement fee

You know that a P2P crypto exchange is meant for making transactions with high privacy, so users will be required to post their requirements in the form of ads. For that, a negligible percentage can be claimed by the admin.


Listing fee

This fast-growing world and its technology haven’t settled for average moves. As the market multiplies rapidly, new cryptocurrencies emerge every day, and some projects might prefer to list them on your exchange. In turn, you could charge them a fee for listing their new cryptocurrencies.

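To make these streams concrete, here is a purely hypothetical back-of-the-envelope sketch in Python (every percentage and volume below is invented for illustration only; actual fees depend entirely on your exchange’s own policy):

# Hypothetical numbers, for illustration only.
monthly_fiat_deposits = 2_000_000       # $ deposited by users per month
monthly_crypto_withdrawals = 1_500_000  # $ equivalent withdrawn per month
monthly_ads_posted = 4_000              # ads posted on the P2P exchange
new_listings = 3                        # new coins listed this month

deposit_fee = 0.001     # assumed 0.1% on fiat deposits
withdrawal_fee = 0.002  # assumed 0.2% on crypto withdrawals
ad_fee = 0.5            # assumed flat $0.50 per posted advertisement
listing_fee = 5_000     # assumed flat $5,000 per new coin listing

revenue = (monthly_fiat_deposits * deposit_fee
           + monthly_crypto_withdrawals * withdrawal_fee
           + monthly_ads_posted * ad_fee
           + new_listings * listing_fee)
print(revenue)  # 2000 + 3000 + 2000 + 15000 = 22000 ($/month in this toy example)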

As mentioned above, these are some of the ways this Localbitcoins clone script acts as a pillar for generating revenue for your business. After recognizing its real potential and efficiency, all you have to do is choose the best Localbitcoins clone script, which will enlighten your crypto exchange #business further and take it to the next level. Obviously, a featured service can only be availed by reaching a professional provider; to experience the above-mentioned revenue, one should obtain the Localbitcoins clone script from a reputable crypto exchange software provider.

Coming to the point, as there is huge competition among the various crypto exchange software providers offering this Localbitcoins clone script, sorting out the genuine ones among them is quite complicated; it might take weeks just to find a suitable one. Surprisingly, I’ve unloaded all of your burdens by doing that research for my own purposes. The result was absolutely shocking. Coinsclone seems to be a perfect fit among others. They have been serving this industry with their expertise in blockchain tech and have delivered hundreds of projects that have turned many business people’s lives around. Have a look at their individuality >>>> COINSCLONE
 

How Do UI/UX Testing Services Improve Business?

Want to implement a well-defined UI/UX strategy that increases conversion rates? Check out this write-up to discover how UI/UX testing services improve the #business!

#UsabilityTesting #UserServices #UserExperienceTesting #UIUX #UXTesting #UITesting #UIUXTesting #UIUXTestingServices #9YardsTechnology

https://9yardstechnology.blogspot.com/2022/04/how-does-uiux-testing-services-improve-business.html

How Do UI/UX Testing Services Improve Business?
Akshara Singh

Akshara Singh

1655374843

How to launch a stunning crypto exchange like Bithumb in a week?

While speaking of the development of a #crypto exchange like Bithumb, finding the most effective way of developing it was the crucial question for the majority of entrepreneurs. One particular development method blew their minds, as it eliminated most of the complex steps that made their journey quite rough. Also, the method they preferred was beneficial to them in an unparalleled way. With various thoughts running through your mind, let’s dive straight into it >>> Bithumb clone script

The Bithumb clone script has been considered by many entrepreneurs the most effective method to launch a crypto exchange similar to Bithumb. It is ready-to-launch crypto exchange software filled with the enriched features that the existing Bithumb exchange has. With all those features infused into it, the result is quite astonishing. Apart from the existing features, additional features suited to your kind of #business can be incorporated into this script. The Bithumb clone script breaks one of the core barriers, the time spent on the development process, allowing entrepreneurs to launch a superfine crypto exchange within a time period of 5-7 days.


Features of Bithumb Clone Script:

  • High-Performance Matching Engine
  • Spot Trading
  • Margin Trading
  • Futures Trading
  • P2P Trading
  • User Dashboard
  • Admin Dashboard
  • Extended Trade View
  • KYC/AML
  • Referral program
  • Crypto/Fiat Payment Gateway integration


Security Features

  • Jail Login
  • Two-Factor Authentication
  • Cloudflare Integration
  • SQL Injection Prevention
  • End-To-End Encryption Based SSL
  • Anti Denial Of Service(Dos)
  • Cross-Site Request Forgery Protection
  • Server-Side Request Forgery Protection
  • Anti-Distributed Denial Of Service

 

The above-listed are some of the basic features a Bithumb clone script possesses. Compiled all together, the outcome will be devastating for the rivals. Making use of this Bithumb clone script, the majority of #entrepreneurs have stayed a phase ahead of their competitors. Being an entrepreneur with lots and lots of plans to launch an extraordinary crypto exchange, all you have to keep in mind is to reach out for the best Bithumb clone script in town to experience those benefits. To do so, reaching out to a professional #cryptoexchange software provider will be the finest plan of all, as they will guide you through the obstacles with their outstanding service.

Among the various crypto exchange clone script providers available in this crypto space, only a few will be able to deliver a service that exactly meets your requirements. Picking out those genuine providers is a bit complicated, as it requires a lot of research and analysis. I have made this entire filtering process simple for you.

Out of the lot, #coinsclone grabbed my attention with its immense results.

Get to know more about COINSCLONE's accomplishments and uniqueness among the vast pile of crypto exchange software providers in the current crypto market.

Whatsapp: +91 9500575285

Skype: live:hello_20214

Mail Id: hello@coinsclone.com

For instant live demo >>>>> Bithumb clone script
 

Akshara Singh

Akshara Singh

1655287616

What’s the actual cost of starting a P2P crypto exchange?

Being an entrepreneur with ideas to extend your territory into the crypto industry, you probably know what the most effective #business model to run would be. Yes! A crypto exchange is the best-suited choice. Among the various types of crypto exchanges, the P2P crypto exchange holds the top position because of the flexibility offered to its users. Considering the security and privacy, many crypto users have adopted it as their tool for crypto purchases. Because of the huge user base and increasing demand, the P2P #crypto exchange stands as an opportunity for emerging entrepreneurs. Before getting started with this new business model, it is a must for an #entrepreneur to have in-depth knowledge of the cost required to deploy such an exchange. 

Basically, a #p2p crypto exchange can be deployed in multiple ways, where the cost plays an important role in the development method,

  1. Developing it on your own
  2. Starting it from scratch
  3. Using a P2P crypto exchange script

 

As we are in a competitive world, delaying an opportunity by minutes can become a huge drawback in the future. We also know that time plays a huge role in this field; considering the time frame and increasing competition, I conclude that the P2P crypto exchange script will be the best choice for an entrepreneur to set up a stunning P2P #cryptoexchange in a matter of days. Before initiating this, one should have a detailed view of the P2P crypto exchange script.

After figuring out the most effective way of establishing a stunning P2P crypto exchange, the most vital factor to know from the start is the expense involved in this development strategy. 

As there are several factors that influence the actual cost of developing a P2P crypto exchange using the P2P crypto exchange script, let’s focus on those factors.

  • Implementation of security features
  • Operational region of your exchange
  • Customizations integrated with the exchange
  • Budget and Time constraints 
  • License for running your crypto exchange
  • Payment Processing

 

As said, these are the most vital factors that influence the overall cost of this crypto exchange development process. Taking care of them will largely bring the budget under control. After making various surveys and reaching out to multiple crypto exchange software providers, it is clear that developing a P2P crypto exchange with the P2P crypto exchange script would roughly #cost around $6K~$14K, which is not a fixed figure either. 

As you’ll be spicing up your crypto exchange with various customizations of your choice that suit your business, the price may vary. Knowing the price range, you can plot the business model more clearly. 

Here’s an extra tip. Before fixing the budget, make sure to have an extensive idea of how to cut unwanted expenses from this crypto exchange development cost, and proceed with crafting your dream business in an affordable way.

Along with the cost, the next important thing you should take care of is the selection of a crypto exchange software provider. 

To pick the right one, there is a set of complex analysis methods to be followed, which consume much of your time and energy and can leave you daunted. To minimize the effort required on your part, I’ve done the above-mentioned analysis for my personal reference; I hope the result helps you.

I ended up with Coinsclone as the best-in-class crypto exchange clone script/software provider, with significant years of experience. Why them? That might be your question. Some of their past projects and their portfolio made me choose them over others. Instead of hearing it from me, have a glance at it now >>>> COINSCLONE

I hope they can solve your problems with their deep technical knowledge. 

Get in touch with their team of experts and book your free demo.

Whatsapp: +91 9500575285

Skype: live:hello_20214

Mail Id: hello@coinsclone.com  

For an instant live demo >>>>  P2P crypto exchange script

Jonas  Wald

Jonas Wald

1654940928

Why do entrepreneurs prefer to start a crypto exchange like Binance?

In this fast-paced world, it is a bitter truth that every individual has to outsmart others to make sure they survive. This applies to every industry. Speaking of the business platform, most #entrepreneurs look for a high-potential #business model in order to attain a highly recognisable position among others. The business model should also be in line with the current trend. That’s how the #crypto exchange business captured most of their attention among various business models. Wondering how it jumped to the top position? Here’s where your answer is hidden >>>> Crypto exchange business

After figuring it out, the majority of entrepreneurs looked forward to initiating their own crypto exchange similar to #binance. This might make you feel somewhat dizzy, so let me explain it in detail. As you know, people will engage with a particular platform only after fully trusting it. That’s the major reason to go with a Binance-type exchange. It is an undeniable fact that Binance is the top-featured crypto exchange with the maximum number of crypto users. Being a dominant crypto exchange, Binance has managed to hold up to 28.6 million users as of #2022. Isn’t it possible to gain 10% of such a massive user base for your business? It is highly possible. Also, it is possible to gain trust among a new set of people while starting a similar kind of exchange.

As there are various ways to start a crypto exchange like Binance, their main choice of development is the #binanceclonescript. As previously said, this is a fast-paced world, and enthusiastic entrepreneurs look for a simpler way to establish such a crypto exchange. That’s why they opted for the Binance clone script.

The Binance clone script is the simplest way of initiating a crypto exchange similar to Binance, as this crypto exchange software is equipped with all the fascinating features of the Binance exchange. This is why the entire development process of establishing this crypto exchange is made easy.

Having clarified the ways one by one, there is a final crucial step to take care of. It’s none other than the selection of the right crypto exchange software provider in the current crypto space. As you know, a particular service can only be claimed in an enriched manner with the help of a genuine provider. That’s why it should be considered the most important step in the crypto exchange business. Failing to choose such a genuine provider can make the entire business collapse in a matter of days due to poor quality of service.

Why be part of a disaster when a better solution is at hand? Yes! I do have the solution you’re searching for. After complete research for my personal use case, I ended up with Coinsclone. Being a supreme crypto exchange software provider, they are positioned at the top among several rivals in the market. Know what made me end up with #coinsclone.

After getting to know them, it wouldn’t be fair to continue with the same set of words. Instead, get in touch with their team of experts, who will guide you through the entire development phase and clear all of your queries in a snap.

Whatsapp: +91 9500575285

Skype: live:hello_20214

Mail Id: hello@coinsclone.com

Have a great experience with their work and proceed with your business process effectively. 

For instant live demo >>>>> Binance clone script
 

Why do entrepreneurs prefer to start a crypto exchange like Binance?