ADALINE

ADALINE (Adaptive Linear Neuron) is an early single-layer artificial neural network and a precursor to more complex neural networks like the multilayer perceptron. Developed by Bernard Widrow and Ted Hoff at Stanford University in 1960, ADALINE differs from the perceptron in its learning rule. While the perceptron's learning rule updates weights based on the thresholded output (the predicted class), ADALINE updates weights based on the linear activation function's output before thresholding. This makes ADALINE's learning rule, known as the Widrow-Hoff rule or the Least Mean Squares (LMS) rule, more mathematically tractable and suitable for gradient descent optimization.
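The difference between the two update rules can be sketched in a few lines of Python (the learning rate `eta` and the ±1 label convention are illustrative assumptions, not part of the original formulation):

```python
import numpy as np

def perceptron_update(w, x, y, eta=0.1):
    """Perceptron rule: the error is computed AFTER thresholding."""
    y_pred = 1 if np.dot(w, x) >= 0 else -1   # thresholded (predicted class)
    return w + eta * (y - y_pred) * x

def adaline_update(w, x, y, eta=0.1):
    """Widrow-Hoff (LMS) rule: the error uses the raw linear output."""
    net = np.dot(w, x)                        # linear activation, no threshold
    return w + eta * (y - net) * x
```

Because ADALINE's error is a continuous function of the weights, the LMS rule is a true gradient-descent step on the squared error, whereas the perceptron's thresholded error is not differentiable.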

The ADALINE model consists of the following components:

  • Inputs: ADALINE receives a vector of real-valued inputs representing the features of the input data.

  • Weights: Each input is associated with a weight, which represents the importance of that input to the final output.

  • Summation: The weighted sum of the inputs is calculated.

  • Activation Function: The summation is passed through a linear activation function (often simply the identity function). This produces a continuous output signal. Crucially, this output is used for weight updates.

  • Quantizer/Threshold: The output of the linear activation function is then passed through a threshold function (typically a step function) to produce a binary output (+1/−1 or 0/1, depending on the implementation), representing the predicted class.

  • Learning Rule (Widrow-Hoff Rule/LMS): The weights are adjusted to minimize the error between the linear activation function's output and the desired target value. The weight update is proportional to this error and the input values.
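The components listed above can be traced in a short forward-pass sketch (the function name, weights, and input values are illustrative):

```python
import numpy as np

def adaline_forward(x, w, b):
    """One pass through the ADALINE components: summation,
    linear activation, and quantizer/threshold."""
    net = np.dot(w, x) + b                    # summation: weighted sum (plus bias)
    linear_out = net                          # activation: identity function
    y_pred = 1 if linear_out >= 0.0 else -1   # quantizer: binary class label
    return linear_out, y_pred
```

Note that `linear_out` (used for the weight update) and `y_pred` (the reported class) are distinct outputs; keeping them separate is exactly what distinguishes ADALINE from the perceptron.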

ADALINE's learning process involves iterating through the training data, calculating the error, and updating the weights accordingly. This iterative process continues until the error converges to a minimum or a predefined stopping criterion is met.
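The iterative process described above might be sketched as full-batch gradient descent with the LMS rule (the dataset, learning rate, epoch count, and tolerance are illustrative assumptions; here the network learns logical AND, a linearly separable task, with ±1 targets):

```python
import numpy as np

def train_adaline(X, y, eta=0.1, epochs=100, tol=1e-4):
    """Train ADALINE by gradient descent on the mean squared error."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])  # small random initial weights
    b = 0.0
    for _ in range(epochs):
        net = X @ w + b                      # linear outputs for all samples
        errors = y - net                     # error BEFORE thresholding
        w += eta * X.T @ errors / len(y)     # Widrow-Hoff weight update
        b += eta * errors.mean()
        if np.mean(errors**2) < tol:         # predefined stopping criterion
            break
    return w, b

# Logical AND with +1/-1 targets (linearly separable):
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1], dtype=float)
w, b = train_adaline(X, y, eta=0.2, epochs=500)
preds = np.where(X @ w + b >= 0, 1, -1)      # threshold only for prediction
```

After training, `preds` matches the AND targets; the squared error converges not to zero but to the minimum achievable by a linear model on this data.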

While ADALINE itself is a single-layer network, its principles, particularly the LMS rule, have been influential in the development of more advanced neural networks. However, ADALINE is limited to linearly separable problems due to its single-layer architecture and linear activation before thresholding: it cannot learn patterns that are not linearly separable, such as the XOR function.
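A quick way to see this limitation is to run the LMS rule on XOR, the canonical non-linearly-separable problem: no single linear threshold unit can classify all four XOR points correctly, so accuracy is capped at 3/4 regardless of training (hyperparameters below are illustrative):

```python
import numpy as np

# XOR with +1/-1 targets: NOT linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1], dtype=float)

rng = np.random.default_rng(1)
w = rng.normal(scale=0.5, size=2)            # small random initial weights
b = 0.0
for _ in range(1000):                        # LMS / gradient descent updates
    errors = y - (X @ w + b)
    w += 0.1 * X.T @ errors / len(y)
    b += 0.1 * errors.mean()

preds = np.where(X @ w + b >= 0, 1, -1)
accuracy = (preds == y).mean()               # can never reach 1.0 on XOR
```

Solving XOR requires a hidden layer with a non-linear activation, which is precisely what multilayer perceptrons later introduced.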