Di Fiore, C., Fanelli, S., & Zellini, P. (2004). An efficient generalization of Battiti-Shanno's quasi-Newton algorithm for learning in MLP-networks. In Neural Information Processing (pp. 483-488). Calcutta: Springer.
An efficient generalization of Battiti-Shanno's quasi-Newton algorithm for learning in MLP-networks
Di Fiore, Carmine; Fanelli, Stefano; Zellini, Paolo
2004-01-01
Abstract
This paper presents a novel quasi-Newton method for the minimization of the error function of a feed-forward neural network. The method is a generalization of Battiti's well-known OSS algorithm. The aim of the proposed approach is to achieve a significant improvement both in terms of computational effort and in the capability of locating the global minimum of the error function. The technique described in this work is founded on the innovative concept of a "convex algorithm", designed to avoid entrapment in local minima. Convergence results as well as numerical experiments are presented.
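For orientation, the direction update at the core of Battiti's OSS scheme is the one-step secant (memoryless BFGS) formula of Shanno, which builds the new search direction from the current gradient and the most recent step/gradient-change pair, without storing any Hessian approximation. The sketch below is a minimal NumPy illustration of that standard formula, not of the generalized method proposed in the paper; the function name and the curvature safeguard are our own assumptions.

```python
import numpy as np

def oss_direction(g_new, s, y, eps=1e-12):
    """One-step secant (memoryless BFGS) search direction of the kind
    used in Battiti's OSS algorithm (illustrative sketch).

    g_new : gradient at the new iterate w_{k+1}
    s     : last step, w_{k+1} - w_k
    y     : gradient change, g_{k+1} - g_k
    """
    sy = s @ y                     # curvature term s^T y
    if sy <= eps:                  # safeguard (our choice): restart with steepest descent
        return -g_new
    B = (s @ g_new) / sy
    A = -(1.0 + (y @ y) / sy) * B + (y @ g_new) / sy
    return -g_new + A * s + B * y
```

In a training loop this direction replaces the plain negative gradient and is combined with a line search; since only two extra vectors are kept, the memory cost stays linear in the number of network weights.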
File | Description | Size | Format | Access
---|---|---|---|---
CALCUTTA.pdf | Article | 140.59 kB | Adobe PDF | Open access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.