Links between LVQ and Backpropagation
P. Frasconi(*), M. Gori(**), and G. Soda(*)
DSI Neural Network Research Center
WWW: http://www-dsi.ing.unifi.it/neural
(*) Università di Firenze
Via di Santa Marta 3 - 50139 Firenze (Italy)
(**) Università di Siena
Via Roma, 56 - 53100 Siena (Italy)
Abstract
In this paper we show that there are some intriguing links between the Backpropagation and LVQ algorithms. We show that, when used for training the weights of radial basis function networks, Backpropagation exhibits an increasingly competitive nature as the dispersion parameters decrease. In particular, we prove that LVQ can be regarded as a competitive learning scheme taking place in radial basis function networks.
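As an informal illustration of the limiting behaviour claimed above (the precise statement is the subject of this paper), consider a layer of normalized Gaussian radial basis functions with centres $w_i$ and a common dispersion parameter $\sigma$; the normalization is an assumption made here only for the sketch:
\[
g_i(x) = \frac{\exp\left(-\|x - w_i\|^2 / 2\sigma^2\right)}{\sum_{j} \exp\left(-\|x - w_j\|^2 / 2\sigma^2\right)},
\qquad
\lim_{\sigma \to 0} g_i(x) =
\begin{cases}
1 & \text{if } i = \arg\min_{j} \|x - w_j\|,\\
0 & \text{otherwise,}
\end{cases}
\]
assuming the nearest centre is unique. As the dispersion shrinks, only the unit whose centre is closest to the input remains active, so gradient-based updates of the centres concentrate on the winning unit; this winner-take-all behaviour is the competitive mechanism that LVQ makes explicit.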
1 Introduction
Backpropagation [1] and Kohonen's LVQ [2] are probably the most widely used neural network learning algorithms for pattern recognition problems. They are commonly presented within different theoretical frameworks and have given rise to many discussions in the scientific community about their effectiveness and experimental results.
The Backpropagation algorithm should simply be regarded as a very efficient scheme for gradient computation. The algorithm is in fact optimal, in that its computational complexity meets the lower bound O(M), where M is the number of weights [3]. Many people, however, regard Backpropagation as a gradient descent algorithm for carrying out the optimization of the cost function¹. Backpropagation has a clear formulation, in that one looks for the global minima of the error function, whereas LVQ is often regarded as a heuristic algorithm closely related to K-means. It has been shown that LVQ
¹ In this paper, we give the term Backpropagation this interpretation.
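For completeness, we recall the form of the LVQ1 update mentioned above, as given by Kohonen [2]; the notation, with $\eta$ the learning rate and $w_c$ the codebook vector closest to the training pattern $x$, is introduced here only for this reminder:
\[
w_c \leftarrow
\begin{cases}
w_c + \eta\,(x - w_c) & \text{if } x \text{ and } w_c \text{ carry the same class label},\\
w_c - \eta\,(x - w_c) & \text{otherwise.}
\end{cases}
\]
A stochastic gradient step on the K-means quantization error $\frac{1}{2}\|x - w_c\|^2$ yields $w_c \leftarrow w_c + \eta\,(x - w_c)$ unconditionally, so the two rules coincide on patterns whose nearest codebook vector has the correct class; this is the sense in which LVQ is closely related to K-means.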