This paper presents a novel quasi-Newton method for the minimization of the error function of a feed-forward neural network. The method is a generalization of Battiti's well-known OSS algorithm. The aim of the proposed approach is to achieve a significant improvement both in computational effort and in the capability of locating the global minimum of the error function. The technique described in this work is founded on the innovative concept of a "convex algorithm", designed to avoid entrapment in local minima. Convergence results as well as numerical experiments are presented.
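The abstract does not reproduce the method itself; for orientation, below is a minimal sketch of the one-step secant (memoryless-BFGS) direction at the core of Battiti's OSS algorithm, which the paper generalizes. It is illustrative only: the `oss_direction` helper, the convex quadratic test objective, and the exact line search are stand-ins chosen for the demo, not the paper's formulation.

```python
import numpy as np

def oss_direction(g, s, y):
    """One-step secant (memoryless BFGS) direction d = -g + A*s + B*y,
    built only from the previous step s = w_k - w_{k-1} and the gradient
    change y = g_k - g_{k-1}; no Hessian approximation is stored."""
    sy = s @ y
    if sy <= 1e-12:                       # curvature safeguard:
        return -g                         # fall back to steepest descent
    B = (s @ g) / sy
    A = (y @ g) / sy - (1.0 + (y @ y) / sy) * B
    return -g + A * s + B * y

# Illustrative run on a convex quadratic E(w) = 0.5*w^T Q w - b^T w,
# a stand-in for an MLP error function (Q, b are made up for the demo).
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda w: Q @ w - b

w = np.zeros(2)
g = grad(w)
d = -g                                    # first iteration: steepest descent
for k in range(20):
    alpha = -(g @ d) / (d @ Q @ d)        # exact line search (quadratics only)
    w_prev, g_prev = w, g
    w = w + alpha * d
    g = grad(w)
    if np.linalg.norm(g) < 1e-10:
        break
    d = oss_direction(g, w - w_prev, g - g_prev)

print("minimizer:", w, "gradient norm:", np.linalg.norm(g))
```

With an exact line search on a quadratic, this memoryless update reproduces conjugate-gradient-like behavior while storing only two vectors per iteration, which is what makes OSS-type methods attractive for large MLP weight spaces.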
Di Fiore, C., Fanelli, S., & Zellini, P. (2004). An efficient generalization of Battiti-Shanno's quasi-Newton algorithm for learning in MLP-networks. In Neural Information Processing (pp. 483-488). Calcutta: Springer.
Authors: | Di Fiore, C; Fanelli, S; Zellini, P |
Title: | An efficient generalization of Battiti-Shanno's quasi-Newton algorithm for learning in MLP-networks |
Conference name: | 11th International Conference, ICONIP 2004 |
Conference venue: | Calcutta |
Conference year: | 2004 |
Relevance: | International relevance |
Section: | contribution |
Publication date: | 2004 |
Scientific disciplinary sector: | MAT/08 - Numerical Analysis |
Language: | English |
Type: | Conference contribution |
Citation: | Di Fiore, C., Fanelli, S., & Zellini, P. (2004). An efficient generalization of Battiti-Shanno's quasi-Newton algorithm for learning in MLP-networks. In Neural Information Processing (pp. 483-488). Calcutta: Springer. |
Appears in collections: | 02 - Conference contribution |
Files in this record:

File | Description | Type | License
---|---|---|---
CALCUTTA.pdf | Article | N/A | Open Access