Training a multi-layer perceptron (MLP) is not an easy task. There are several steps you must follow to train an MLP and get the most out of it, and skipping any one of them can undermine the whole model. So please don’t forget any of these steps before creating your MLP model.

Steps Required

  1. Data Preprocessing
  2. Weights Initialization
  3. Choosing the right activation function
  4. Batch Normalization
  5. Adding Dropouts
  6. Using Optimizer
  7. Hyperparameters
  8. Loss-Function
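
Before going through the steps one by one, here is a minimal Keras sketch showing where weight initialization, batch normalization, dropout, the optimizer, and the loss function fit into an MLP. The layer sizes, dropout rate, learning rate, input width of 20 features, and binary-classification head are illustrative assumptions only, not recommendations:

```python
import tensorflow as tf

# A small MLP touching most of the steps listed above:
# He initialization for the ReLU layers, batch normalization,
# dropout, an optimizer, and a loss function.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                     # assumes 20 input features
    tf.keras.layers.Dense(128, activation="relu", kernel_initializer="he_normal"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu", kernel_initializer="he_normal"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary classification head
])

# Optimizer and loss function are picked as common defaults for illustration.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
```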

1. Data Preprocessing

Data preprocessing is one of the most important steps in any machine learning or deep learning project. If you skip it, any model you build on top of the raw data will be of little use.

Data that we get from the real world is often incomplete, inconsistent, and inaccurate (it contains errors or outliers), and it frequently lacks specific attribute values or trends. This is where **data preprocessing** comes to the rescue. In data preprocessing we clean the data, fill in missing values, and find and remove outliers.

Steps of Data Preprocessing:

1. **Data Cleaning:** The data we get may contain many irrelevant and missing data points, and data cleaning handles this part. It involves dealing with missing data, noisy data, and more.

2. **Feature Scaling:** It brings features measured on different scales onto a common scale by restricting all data values to a specified range (e.g. -1.0 to 1.0 or 0.0 to 1.0). **Standardization** rescales the data to have a mean of 0 and a standard deviation of 1 (unit variance), while **_normalization_** rescales the values into the range [0, 1].
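
As a rough illustration of both steps, here is a minimal sketch using pandas and scikit-learn. The toy `age` column, the median fill, and the 3-standard-deviation outlier rule are assumptions made just for the example:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Hypothetical toy data: one numeric feature with a missing value and an outlier.
df = pd.DataFrame({"age": [25.0, 32.0, np.nan, 41.0, 29.0, 300.0]})

# Data cleaning: fill the missing value with the median, then drop values
# lying more than 3 standard deviations from the mean.
df["age"] = df["age"].fillna(df["age"].median())
mean, std = df["age"].mean(), df["age"].std()
df = df[(df["age"] - mean).abs() <= 3 * std]

# Feature scaling: standardization (mean 0, unit variance) ...
standardized = StandardScaler().fit_transform(df[["age"]])

# ... and normalization (rescale into the range [0, 1]).
normalized = MinMaxScaler().fit_transform(df[["age"]])
```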

2. Weights Initialization

Weight initialization is used to keep the outputs of the activation layers from running into the exploding-gradient or vanishing-gradient problem during forward and backward propagation through a deep neural network.

Weight initialization depends mostly on the activation function you are using. If your activation function is sigmoid or tanh, it’s better to use Xavier initialization (the Glorot normal initializer), and with an activation function like ReLU, it’s better to use He normal initialization.
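
For example, in Keras this pairing can be expressed through the `kernel_initializer` argument. The layer width of 64 is an arbitrary choice for illustration:

```python
import tensorflow as tf

# Hidden layer with tanh, paired with Xavier / Glorot normal initialization.
tanh_layer = tf.keras.layers.Dense(64, activation="tanh",
                                   kernel_initializer="glorot_normal")

# Hidden layer with ReLU, paired with He normal initialization.
relu_layer = tf.keras.layers.Dense(64, activation="relu",
                                   kernel_initializer="he_normal")
```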
