An artificial neural network's learning rule or learning process is a method, mathematical logic, or algorithm that improves the network's performance and/or training time. Usually, the rule is applied repeatedly over the network, updating its weights and biases as the network is simulated in a specific data environment. A learning rule takes the existing conditions (weights and biases) of the network and compares the network's expected and actual results to produce new, improved values for the weights and biases. Depending on the complexity of the model being simulated, the learning rule can be as simple as an XOR gate or a mean squared error calculation, or as complex as the result of a system of differential equations.
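As a minimal sketch of this compare-and-update cycle, the following Python example applies the delta rule (chosen here purely as an illustrative learning rule) to a single linear unit: it computes the actual output, compares it with the expected output, and adjusts the weights and bias accordingly. The function name, learning rate, and toy data are illustrative, not from the source.

```python
import numpy as np

def delta_rule_update(weights, bias, inputs, target, learning_rate=0.1):
    """One update step of the delta rule for a single linear unit.

    Compares the unit's actual output with the expected (target) output
    and nudges the weights and bias to reduce the error.
    """
    actual = np.dot(weights, inputs) + bias             # actual result of the network
    error = target - actual                             # expected vs. actual result
    weights = weights + learning_rate * error * inputs  # new, improved weights
    bias = bias + learning_rate * error                 # new, improved bias
    return weights, bias

# Applying the rule repeatedly over a small data environment
weights, bias = np.zeros(2), 0.0
data = [(np.array([0.0, 1.0]), 1.0), (np.array([1.0, 0.0]), 0.0)]
for _ in range(100):
    for x, t in data:
        weights, bias = delta_rule_update(weights, bias, x, t)
```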
The learning rule is one of the factors that determine how quickly and how accurately the neural network can be developed. Depending on the process used to develop the network, there are three main paradigms of machine learning: