The multilayer perceptron is a supervised machine learning algorithm in which a model is trained layer by layer. It is a feedforward artificial neural network that generates a set of outputs from a set of inputs. In this algorithm you initialize the weights and biases randomly. In this post we will explain a two-layer multilayer perceptron. In a two-layer perceptron we have one input layer, one hidden layer, and one output layer. By convention the input layer is not counted, so in total this structure has two layers.
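The structure above can be sketched in code. This is a minimal sketch using NumPy, with hypothetical sizes (2 inputs, 2 hidden neurons, 1 output); the layer sizes and random seed are assumptions for illustration, not part of the algorithm.

```python
import numpy as np

# Hypothetical two-layer perceptron: 2 inputs, 2 hidden neurons, 1 output.
# The input layer is not counted, so this network has two layers:
# the hidden layer and the output layer.
rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 2, 2, 1

# Weights and biases are initialized randomly, with one bias vector per layer.
W_hidden = rng.normal(size=(n_inputs, n_hidden))   # input  -> hidden
b_hidden = rng.normal(size=n_hidden)
W_output = rng.normal(size=(n_hidden, n_outputs))  # hidden -> output
b_output = rng.normal(size=n_outputs)

print(W_hidden.shape, W_output.shape)
```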
Training a model in a multilayer perceptron
Every machine learning algorithm has its own way of training on data and drawing conclusions from the trained model. In a multilayer perceptron there are three basic steps to train the model. It is very important to understand these steps, otherwise you will not be able to understand the multilayer perceptron as a whole. The three steps are:
1. Forward pass
2. Calculate Error or Loss
3. Backward pass
These three steps have specific functions and responsibilities. In short: in the forward pass you move from the input layer to the output layer. To calculate the error you take the difference between the target value and the output (the target value is already given, because this is supervised learning, and the output value comes from the output layer). In the backward pass you move from the output layer back to the input layer, and in this step you are updating the weights. What a weight or a bias is, I will explain practically later in this post.
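The three steps above can be sketched as a training loop. This is a minimal sketch, assuming NumPy, a sigmoid activation, a squared-error loss, and hypothetical values for the inputs, target, and learning rate; it is one possible implementation, not the only one.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = np.array([0.5, 0.1])       # example inputs x1, x2 (assumed values)
target = np.array([1.0])       # target value, given since learning is supervised

# Random initial weights and biases, one bias vector per layer.
W1 = rng.normal(size=(2, 2)); b1 = rng.normal(size=2)   # hidden layer
W2 = rng.normal(size=(2, 1)); b2 = rng.normal(size=1)   # output layer
lr = 0.5                                                # learning rate (assumed)

for step in range(1000):
    # 1. Forward pass: input layer -> hidden layer -> output layer
    h = sigmoid(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # 2. Calculate error: squared difference between target and output
    loss = 0.5 * np.sum((target - out) ** 2)

    # 3. Backward pass: output layer -> input layer, updating the weights
    d_out = (out - target) * out * (1 - out)   # gradient at output layer
    d_h = (d_out @ W2.T) * h * (1 - h)         # gradient at hidden layer
    W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
    W1 -= lr * np.outer(x, d_h);   b1 -= lr * d_h

print(round(loss, 4))  # the loss shrinks toward 0 as training proceeds
```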
What are weights and biases in a multilayer perceptron?
When you compute the hidden layer values, the relation is H1 = x1.w1 + x2.w2 + bias1. In this relation x1 and x2 are the inputs, and w1 and w2 are the weights feeding H1 only. Weights start out as random values you assign. There is one bias per layer: for example, with two layers there are two biases, one for the hidden layer and one for the output layer (the input layer is not counted as a layer).
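The relation H1 = x1.w1 + x2.w2 + bias1 can be checked with concrete numbers. The values below are hypothetical, chosen only to make the arithmetic easy to follow.

```python
# Hypothetical numbers for the relation H1 = x1*w1 + x2*w2 + bias1.
x1, x2 = 0.05, 0.10   # inputs
w1, w2 = 0.15, 0.20   # randomly assigned weights feeding H1
bias1 = 0.35          # the single bias for the hidden layer

H1 = x1 * w1 + x2 * w2 + bias1
print(round(H1, 4))  # 0.3775
```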