Multilayer Perceptrons

We solve the XOR problem, which cannot be solved by a linear separator such as the Simple Perceptron implemented earlier. The Multilayer Perceptron is implemented as a feedforward Neural Network that uses backpropagation to adjust its weights and biases. Backpropagation repeatedly adjusts the weights and biases so that the network's outputs move toward the desired outputs.
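In symbols, each parameter is nudged against the gradient of the error. The update rule below is the generic gradient-descent step (with $\eta$ the learning rate and $E$ the error), not notation taken from this repository's code:

$$w \leftarrow w - \eta \frac{\partial E}{\partial w}, \qquad b \leftarrow b - \eta \frac{\partial E}{\partial b}$$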

The Neural Network has 2 layers: a hidden layer and an output layer. Each layer also has a bias, which helps the network produce the proper output. Refer to the notes for insight into the structure of the Neural Network.
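As a rough sketch of that structure in NumPy (the hidden-layer width of 2 is an assumption, since it is not stated here; two hidden units are the usual minimum for XOR):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hidden layer: 2 inputs -> 2 hidden units, each with its own bias (assumed width).
W_hidden = rng.uniform(-1, 1, size=(2, 2))
b_hidden = rng.uniform(-1, 1, size=2)

# Output layer: 2 hidden units -> 1 output, with its own bias.
W_output = rng.uniform(-1, 1, size=(2, 1))
b_output = rng.uniform(-1, 1, size=1)

def sigmoid(z):
    # Activation function, described below.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Feedforward pass: input -> hidden layer -> output layer.
    hidden = sigmoid(x @ W_hidden + b_hidden)
    return sigmoid(hidden @ W_output + b_output)
```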

Training Data for XOR

| # | First Input | Second Input | Output |
|---|-------------|--------------|--------|
| 1 | 0 | 0 | 0 |
| 2 | 0 | 1 | 1 |
| 3 | 1 | 0 | 1 |
| 4 | 1 | 1 | 0 |
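This truth table maps directly to arrays (an illustrative sketch; the variable names are not from the repository):

```python
import numpy as np

# XOR truth table from above: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
```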

Output after Training

The network is trained using the data above. The training method is stochastic gradient descent: on each iteration, one training sample is selected at random and the weights are updated based on that sample alone.

Training runs for 50,000 iterations with a learning rate of 0.1.

The activation function used is the sigmoid, a non-linear function that maps its input to a value between 0 and 1.
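Putting these pieces together, below is a minimal self-contained sketch of such a training loop. It mirrors the setup described above (stochastic gradient descent, sigmoid activations, 50,000 iterations, learning rate 0.1); the 2-unit hidden layer and squared-error loss are assumptions, and this is an illustration rather than the repository's actual implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.uniform(-1, 1, size=(2, 2))  # hidden-layer weights (assumed 2 hidden units)
b1 = rng.uniform(-1, 1, size=(1, 2))  # hidden-layer biases
W2 = rng.uniform(-1, 1, size=(2, 1))  # output-layer weights
b2 = rng.uniform(-1, 1, size=(1, 1))  # output-layer bias

lr = 0.1
for _ in range(50000):
    # Stochastic gradient descent: pick one training sample at random.
    i = rng.integers(len(X))
    x, t = X[i:i+1], y[i:i+1]

    # Forward pass.
    h = sigmoid(x @ W1 + b1)  # hidden activations
    o = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: error terms via the chain rule, using the fact
    # that sigmoid'(z) = a * (1 - a) where a = sigmoid(z).
    delta_o = (o - t) * o * (1 - o)
    delta_h = (delta_o @ W2.T) * h * (1 - h)

    # Gradient-descent updates for weights and biases.
    W2 -= lr * h.T @ delta_o
    b2 -= lr * delta_o
    W1 -= lr * x.T @ delta_h
    b1 -= lr * delta_h

# After training, the outputs should be close to the XOR targets.
for x in X:
    print(x, sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2))
```

A run of this sketch typically prints values close to 0 for the (0, 0) and (1, 1) inputs and close to 1 for the other two.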

| # | First Input | Second Input | Output |
|---|-------------|--------------|--------|
| 1 | 0 | 0 | |
| 2 | 0 | 1 | |
| 3 | 1 | 0 | |
| 4 | 1 | 1 | |