Di Fiore, C., Fanelli, S., & Zellini, P. (2002). Computational experiences of a novel global algorithm for optimal learning in MLP-networks. In Proceedings of ICONIP 2002 (pp. 317-321). doi:10.1109/ICONIP.2002.1202185
Computational experiences of a novel global algorithm for optimal learning in MLP-networks
Di Fiore, Carmine; Fanelli, Stefano; Zellini, Paolo
2002
Abstract
This paper presents numerical experiments with a new global "pseudo-backpropagation" algorithm for the optimal learning of feedforward neural networks. The proposed method is founded on a new concept, called "non-suspiciousness", which can be seen as a generalisation of convexity. The algorithm follows several adaptive strategies in order to avoid entrapment in local minima, and in many cases the global minimum of the error function can be computed successfully. The paper also provides a comparison between the proposed method and a well-known deterministic global optimisation algorithm from the literature.
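The abstract only sketches the approach; the "non-suspiciousness" concept and the pseudo-backpropagation algorithm itself are detailed in the paper. As a loose illustration of the underlying problem, computing the global minimum of a non-convex MLP error function while avoiding entrapment in local minima, the following Python sketch trains a tiny network with plain gradient descent plus random restarts. This is a generic multi-start strategy, not the authors' method, and every name and parameter in it is hypothetical.

```python
# Minimal sketch (NOT the paper's algorithm): gradient descent on an MLP
# error function E(w), with random restarts so that a run trapped in a
# poor local minimum can be discarded in favour of a lower one.
import numpy as np

rng = np.random.default_rng(0)

# XOR data: a classic small problem with a non-convex error surface.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def unpack(w):
    # Hypothetical 2-2-1 network: weights and biases flattened into
    # a single parameter vector w of length 9.
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    return W1, b1, W2, b2

def error(w):
    # Mean-squared error E(w) of the network on the training set.
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)      # hidden layer
    out = h @ W2 + b2             # linear output unit
    return 0.5 * np.mean((out - y) ** 2)

def num_grad(w, eps=1e-6):
    # Finite-difference gradient; a real implementation would use
    # backpropagation instead.
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (error(w + d) - error(w - d)) / (2 * eps)
    return g

def train(restarts=20, steps=5000, lr=0.5, tol=1e-8):
    best_w, best_e = None, np.inf
    for _ in range(restarts):
        w = rng.normal(size=9)            # fresh random starting point
        for _ in range(steps):
            g = num_grad(w)
            if np.linalg.norm(g) < tol:   # stationary point reached
                break
            w -= lr * g
        e = error(w)
        if e < best_e:                    # keep the lowest minimum found
            best_w, best_e = w, e
    return best_w, best_e

w, e = train()
print(f"best error found: {e:.2e}")       # near zero => global minimum
```

Multi-start descent of this kind is the simplest baseline against which adaptive global strategies, such as the one the paper proposes, are usually motivated: it escapes local minima only by luck of the starting point, rather than by detecting and reacting to entrapment during the run.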