Basic Perceptron
Feedforward

On the left you can see a basic perceptron. There are two input neurons (green) and one output neuron (red). Each input neuron is connected to the output neuron by its own weight (gray).

The upper input neuron takes the x value and the other one takes the y value. The output neuron receives each input multiplied by its weight, plus the bias.
Sum = x * w1 + y * w2 + 1.0 * b

The output neuron then passes this sum through a simple sigmoid function to produce its output.
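A minimal sketch of this feedforward step in Python (the function names and the concrete weight and bias values below are made up for illustration):

import math

def sigmoid(z):
    # squash the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(x, y, w1, w2, b):
    # weighted sum of both inputs plus the bias, then the sigmoid
    return sigmoid(x * w1 + y * w2 + 1.0 * b)

# example call with arbitrary starting weights
print(feedforward(0.3, 0.7, w1=0.5, w2=-0.4, b=0.1))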

At the moment the real output is far away from the desired output.
Let's train it.

Training

Training is error minimization. The error is the difference between the desired and the real output.
Error = desired - real

We add the error, scaled by a learning rate n and by the input of the corresponding input neuron, to each weight.
Weight_a = Weight_a + Error * Input_a * n
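A sketch of this update rule for a single training example, reusing the feedforward function from above (train_step and its default learning rate are illustrative choices):

def train_step(x, y, desired, w1, w2, b, n=0.1):
    real = feedforward(x, y, w1, w2, b)
    error = desired - real
    # add the error, scaled by the input and the learning rate, to each weight
    w1 = w1 + error * x * n
    w2 = w2 + error * y * n
    b = b + error * 1.0 * n  # the bias input is always 1.0
    return w1, w2, b

Repeating this step over many labeled points drives the error down.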

Testing

On the left you can see the training result.
The perceptron now returns >= 0.5 (green) for values above the function and < 0.5 (red) for values below it.

Test the perceptron with some custom values.
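Putting the two sketches above together, a possible test run could look like this (the separating line y = x, the number of steps, and the sample point are arbitrary choices):

import random

w1, w2, b = random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1)
for _ in range(10000):
    x, y = random.random(), random.random()
    desired = 1.0 if y > x else 0.0  # label points by the line y = x
    w1, w2, b = train_step(x, y, desired, w1, w2, b)

out = feedforward(0.2, 0.9, w1, w2, b)  # a point well above the line
print("green" if out >= 0.5 else "red")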

Multilayer Perceptron
Solving XOR

As you may have seen, the last perceptron can only solve linearly separable problems.
Think about an XOR truth table and you will see that you cannot split the results (1 and 0) into two groups with one line. That's why we need hidden neurons (blue). The two hidden neurons will learn hidden boolean functions like AND and OR (you can recreate an XOR by ANDing a NAND with an OR). The output neuron will process the outputs of both hidden neurons and hopefully return the right result.
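A sketch of the forward pass for such a 2-2-1 network, reusing the sigmoid from above (the parameter names wh, bh, wo and bo are illustrative):

def mlp_feedforward(x, y, wh, bh, wo, bo):
    # wh: 2x2 weights into the two hidden neurons, bh: their biases
    h1 = sigmoid(x * wh[0][0] + y * wh[0][1] + bh[0])
    h2 = sigmoid(x * wh[1][0] + y * wh[1][1] + bh[1])
    # wo: weights from the hidden neurons to the output neuron, bo: its bias
    out = sigmoid(h1 * wo[0] + h2 * wo[1] + bo)
    return out, (h1, h2)

The hidden activations are returned as well because the training step below needs them.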

Training

Perceptrons with hidden layers need a new training strategy.
For now we will use the backpropagation algorithm for error minimization.
I will not explain how it works, but the Wikipedia page on Backpropagation [GER] should help a lot.

Sometimes the algorithm gets stuck in a local minimum. Local minima are everywhere, and you can sometimes avoid them by adjusting the learning rate.
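As a sketch, one backpropagation step for this 2-2-1 network with sigmoid activations and a squared error could look like this (again, all names and the default learning rate are illustrative):

def backprop_step(x, y, desired, wh, bh, wo, bo, n=0.5):
    out, (h1, h2) = mlp_feedforward(x, y, wh, bh, wo, bo)
    # output delta: error times the sigmoid derivative out * (1 - out)
    delta_o = (desired - out) * out * (1.0 - out)
    # hidden deltas: push the output delta back through the output weights
    delta_h1 = delta_o * wo[0] * h1 * (1.0 - h1)
    delta_h2 = delta_o * wo[1] * h2 * (1.0 - h2)
    # update the output weights and bias
    wo[0] += n * delta_o * h1
    wo[1] += n * delta_o * h2
    bo += n * delta_o
    # update the hidden weights and biases
    wh[0][0] += n * delta_h1 * x
    wh[0][1] += n * delta_h1 * y
    bh[0] += n * delta_h1
    wh[1][0] += n * delta_h2 * x
    wh[1][1] += n * delta_h2 * y
    bh[1] += n * delta_h2
    return wh, bh, wo, bo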

XOR  0  1
  0  0  1
  1  1  0
Testing

If everything went right, you should see an XOR.
In the XOR table (left) you can see the result for every combination.

The graph displays the values from the perceptron. Values over 0.85 are marked green and values under 0.85 are marked red. If you can see two green zones or two red zones, everything works properly.
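After training on the four XOR combinations many times, you can print such a table yourself using the sketches above (the iteration count is arbitrary, and 0.85 is the threshold mentioned above):

import random

wh = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
bh = [random.uniform(-1, 1) for _ in range(2)]
wo = [random.uniform(-1, 1) for _ in range(2)]
bo = random.uniform(-1, 1)

samples = [(0, 0, 0.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 0.0)]
for _ in range(20000):
    x, y, desired = random.choice(samples)
    wh, bh, wo, bo = backprop_step(x, y, desired, wh, bh, wo, bo)

# if the network got stuck in a local minimum, rerun with a new random init
for x, y, _ in samples:
    out, _ = mlp_feedforward(x, y, wh, bh, wo, bo)
    print(x, y, "->", round(out, 2), "green" if out > 0.85 else "red")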