Convergence analysis for principal component flows
Yoshizawa, Shintaro ; Helmke, Uwe ; Starkov, Konstantin
International Journal of Applied Mathematics and Computer Science, Volume 11 (2001), pp. 223-236 / Harvested from The Polish Digital Mathematics Library

A common framework for analyzing the global convergence of several flows for principal component analysis is developed. It is shown that the flows proposed by Brockett, Oja, Xu and others are all gradient flows, and their global convergence to single equilibrium points is established. The signature of the Hessian at each critical point is determined.
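For orientation, Oja's subspace flow (the case analyzed in detail in [014] below) is commonly written as dW/dt = C W - W W^T C W, where C is a symmetric n x n covariance matrix and W an n x k weight matrix; the span of W converges to the dominant k-dimensional eigenspace of C. The Python sketch below is not taken from the paper: the matrix sizes, step size, and iteration count are illustrative assumptions, and it simply integrates this flow with a forward Euler scheme and checks convergence via principal angles.

    # Minimal numerical sketch (not from the paper): forward-Euler integration of
    # Oja's subspace flow dW/dt = C W - W W^T C W for a random symmetric positive
    # semidefinite matrix C. Dimensions, step size, and iteration count are ad hoc.
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 6, 2                       # ambient dimension and number of components
    A = rng.standard_normal((n, n))
    C = A @ A.T                       # symmetric "covariance" matrix
    W = rng.standard_normal((n, k))   # initial weight matrix
    dt = 1e-3                         # Euler step size (ad hoc choice)

    for _ in range(100_000):
        W = W + dt * (C @ W - W @ (W.T @ C @ W))   # one Euler step of Oja's flow

    # Compare span(W) with the span of the k dominant eigenvectors of C:
    # the cosines of the principal angles should all be close to 1.
    eigvals, eigvecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    V = eigvecs[:, -k:]                            # dominant eigenvectors
    Q, _ = np.linalg.qr(W)
    print("principal-angle cosines:", np.linalg.svd(Q.T @ V, compute_uv=False))

This is only a discretized illustration; the article itself studies the continuous-time flows and establishes pointwise convergence analytically.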

Published: 2001-01-01
EUDML-ID : urn:eudml:doc:207501
@article{bwmeta1.element.bwnjournal-article-amcv11i1p223bwm,
     author = {Yoshizawa, Shintaro and Helmke, Uwe and Starkov, Konstantin},
     title = {Convergence analysis for principal component flows},
     journal = {International Journal of Applied Mathematics and Computer Science},
     volume = {11},
     year = {2001},
     pages = {223-236},
     zbl = {1169.93344},
     language = {en},
     url = {http://dml.mathdoc.fr/item/bwmeta1.element.bwnjournal-article-amcv11i1p223bwm}
}
Yoshizawa, Shintaro; Helmke, Uwe; Starkov, Konstantin. Convergence analysis for principal component flows. International Journal of Applied Mathematics and Computer Science, Volume 11 (2001), pp. 223-236. http://gdmltest.u-ga.fr/item/bwmeta1.element.bwnjournal-article-amcv11i1p223bwm/

[000] Baldi P. and Hornik K. (1991): Back-propagation and unsupervised learning in linear networks, In: Backpropagation: Theory, Architectures and Applications (Y. Chauvin and D.E. Rumelhart, Eds.). - Hillsdale, NJ: Erlbaum Associates.

[001] Baldi P. and Hornik K. (1995): Learning in linear neural networks: A survey. - IEEE Trans. Neural Netw., Vol.6, No.4, pp.837-858.

[002] Brockett R.W. (1991): Dynamical systems that sort lists, diagonalize matrices and solve linear programming problems. - Lin. Algebra Appl., Vol.146, pp.79-91. | Zbl 0719.90045

[003] Helmke U. and Moore J.B. (1994): Optimization and Dynamical Systems. - London: Springer. | Zbl 0984.49001

[004] Łojasiewicz S. (1983): Sur les trajectoires du gradient d'unefonction analytique. - Seminari di Geometria, Bologna, Vol.15, pp.115-117. | Zbl 0606.58045

[005] Oja E. (1982): A simplified neuron model as a principal component analyzer. - J. Math. Biol., Vol.15, No.3, pp.267-273. | Zbl 0488.92012

[006] Oja E. and Karhunen J. (1985): On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix. - J. Math. Anal. Appl., Vol.106, No.1, pp.69-84. | Zbl 0583.62077

[007] Oja E. (1989): Neural networks, principal components, and subspaces. - Int. J. Neural Syst., Vol.1, pp.61-68.

[008] Oja E., Ogawa H. and Wangviwattana J. (1992a): Principal component analysis by homogeneous neural networks, Part I: The weighted subspace criterion. - IEICE Trans. Inf. Syst., Vol.3, pp.366-375.

[009] Oja E., Ogawa H. and Wangviwattana J. (1992b): Principal component analysis by homogeneous neural networks, Part II: Analysis and extensions of the learning algorithms. - IEICE Trans. Inf. Syst., Vol.3, pp.376-382.

[010] Sanger T.D. (1989): Optimal unsupervised learning in a single-layer linear feedforward network. - Neural Netw., Vol.2, No.6, pp.459-473.

[011] Williams R. (1985): Feature discovery through error-correcting learning. - Tech. Rep. No.8501, University of California, San Diego, Inst. of Cognitive Science.

[012] Wyatt J.L. and Elfadel I.M. (1995): Time-domain solutions of Oja's equations. - Neural Comp., Vol.7, No.5, pp.915-922.

[013] Xu L. (1993): Least mean square error reconstruction principle for self-organizing neural nets. - Neural Netw., Vol.6, No.5, pp.627-648.

[014] Yan W.Y., Helmke U. and Moore J.B. (1994): Global analysis of Oja's flow for neural networks. - IEEE Trans. Neural Netw., Vol.5, No.5, pp.674-683.