Bortoletti, A., Di Fiore, C., Fanelli, S., Zellini, P. (2003). A new class of quasi-Newtonian methods for optimal learning in MLP-networks. IEEE Transactions on Neural Networks, 14(2), 263-273.
A new class of quasi-Newtonian methods for optimal learning in MLP-networks
Di Fiore, Carmine; Fanelli, Stefano; Zellini, Paolo
2003-01-01
Abstract
In this paper, we present a new class of quasi-Newton methods for effective learning in large multilayer perceptron (MLP) networks. The algorithms introduced in this work, named LQN, employ an iterative scheme based on a generalized BFGS-type method involving a suitable family of matrix algebras L. The main advantages of these methods are an O(n log_2 n) complexity per step and an O(n) memory requirement. Numerical experiments, performed on a set of standard MLP-network benchmarks, demonstrate the competitiveness of the LQN methods, especially for large values of n.
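The abstract compresses the construction into one sentence, so a rough sketch may help convey where the O(n log_2 n) per-step cost and the O(n) memory footprint come from. The Python fragment below is our own illustration, not the paper's algorithm: it keeps the Hessian approximation inside a matrix algebra diagonalized by a fast orthogonal transform (the DCT here is a stand-in for the paper's algebras L), stores only the n eigenvalues of the approximation, and updates them with a simple componentwise secant fit in place of the paper's exact L-projection of the BFGS update.

```python
# Illustrative sketch only: LQN proper projects the BFGS update onto a
# matrix algebra L diagonalized by a fast transform. Here the DCT algebra
# and the componentwise secant fit are our simplifying assumptions.
# Every quantity is an n-vector, so each step costs O(n log n) time
# (two fast transforms) and O(n) memory.
import numpy as np
from scipy.fft import dct, idct


def transform(v):
    """Orthonormal DCT, playing the role of U^T in B = U diag(d) U^T."""
    return dct(v, norm="ortho")


def inverse_transform(v):
    return idct(v, norm="ortho")


def update_eigenvalues(d, s, y, eps=1e-8):
    """Fit diag(d) to the secant pair (s, y) in the transform domain.

    A least-squares surrogate for the paper's L-projection: minimize
    (d_i * s_hat_i - y_hat_i)^2 componentwise, keeping d positive so the
    Hessian approximation B stays positive definite.
    """
    s_hat, y_hat = transform(s), transform(y)
    d_new = (y_hat * s_hat + eps) / (s_hat * s_hat + eps)
    return np.maximum(d_new, eps)


def minimize(f, grad, x0, steps=100):
    """Toy structured quasi-Newton loop with Armijo backtracking."""
    x = x0.copy()
    d = np.ones_like(x0)  # B_0 = I, stored as its n eigenvalues
    g = grad(x)
    for _ in range(steps):
        if np.linalg.norm(g) < 1e-10:
            break
        # Search direction p = -B^{-1} g: two transforms, O(n log n).
        p = -inverse_transform(transform(g) / d)
        # Backtracking line search; p is a descent direction since d > 0.
        alpha, fx = 1.0, f(x)
        while f(x + alpha * p) > fx + 1e-4 * alpha * np.dot(g, p):
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        d = update_eigenvalues(d, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x


if __name__ == "__main__":
    # Smoke test on a simple quadratic with eigenvalues in [1, 10].
    a = np.linspace(1.0, 10.0, 64)
    x_star = minimize(lambda x: 0.5 * np.dot(x, a * x),
                      lambda x: a * x,
                      np.ones(64))
    print("final gradient norm:", np.linalg.norm(a * x_star))
```

Because every operation is either a fast transform or a componentwise vector update, one iteration costs O(n log_2 n) time and O(n) storage, mirroring the complexity claims of the abstract; the paper's actual update rule and choice of algebras L differ in detail.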