Multifeedback-layer neural network
Date
2007
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
IEEE-Inst Electrical Electronics Engineers Inc
Access Rights
info:eu-repo/semantics/closedAccess
Abstract
The architecture and training procedure of a novel recurrent neural network (RNN), referred to as the multifeedback-layer neural network (MFLNN), is described in this paper. The main difference between the proposed network and existing RNNs is that the temporal relations are provided by neurons arranged in three feedback layers, not by simple feedback elements, in order to enrich the representation capabilities of recurrent networks. The feedback layers provide local and global recurrences via nonlinear processing elements. In these feedback layers, weighted sums of the delayed outputs of the hidden and output layers are passed through certain activation functions and applied to the feedforward neurons via adjustable weights. Both online and offline training procedures based on the backpropagation through time (BPTT) algorithm are developed. The adjoint model of the MFLNN is built to compute the derivatives with respect to the MFLNN weights, which are then used in the training procedures. The Levenberg-Marquardt (LM) method with a trust region approach is used to update the MFLNN weights. The performance of the MFLNN is demonstrated on several illustrative temporal problems, including chaotic time series prediction and nonlinear dynamic system identification, where it outperforms several networks available in the literature.
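The sketch below is a minimal, illustrative forward pass of an MFLNN-style network, intended only to convey the idea described in the abstract: delayed hidden- and output-layer activities are combined, passed through a nonlinearity in a feedback layer, and fed back into the feedforward neurons through adjustable weights. Layer sizes, the use of a single feedback layer standing in for the paper's three, and all class and variable names are assumptions for illustration; the paper's adjoint-model BPTT training and Levenberg-Marquardt update are not implemented here.

```python
import numpy as np

class MFLNNSketch:
    """Minimal forward-pass sketch of an MFLNN-style recurrent network.

    The wiring is an assumption for illustration, not the paper's exact
    architecture: one feedback layer stands in for the three described
    in the abstract, and no training procedure is included.
    """

    def __init__(self, n_in, n_hid, n_out, n_fb, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        # Feedforward weights: input -> hidden -> output.
        self.W_ih = rng.normal(0, s, (n_hid, n_in))
        self.W_ho = rng.normal(0, s, (n_out, n_hid))
        # Feedback-layer weights: delayed hidden/output activities are
        # combined and passed through a nonlinearity.
        self.W_fb_h = rng.normal(0, s, (n_fb, n_hid))     # hidden(t-1) -> feedback layer
        self.W_fb_o = rng.normal(0, s, (n_fb, n_out))     # output(t-1) -> feedback layer
        self.W_fb_to_h = rng.normal(0, s, (n_hid, n_fb))  # feedback layer -> hidden
        # Delayed states (one-step delays).
        self.h_prev = np.zeros(n_hid)
        self.y_prev = np.zeros(n_out)

    def step(self, x):
        # Feedback neurons: weighted sums of the delayed hidden and output
        # layer outputs passed through an activation function.
        fb = np.tanh(self.W_fb_h @ self.h_prev + self.W_fb_o @ self.y_prev)
        # Feedforward hidden layer receives the input plus the nonlinear
        # feedback signal via adjustable weights.
        h = np.tanh(self.W_ih @ x + self.W_fb_to_h @ fb)
        y = self.W_ho @ h
        self.h_prev, self.y_prev = h, y
        return y

# Usage example: drive the untrained network with a short input sequence.
net = MFLNNSketch(n_in=1, n_hid=5, n_out=1, n_fb=3)
for t in range(4):
    print(net.step(np.array([np.sin(t)])))
```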
Description
Keywords
adjoint model, backpropagation through time (BPTT), identification, Levenberg-Marquardt (LM), prediction, recurrent neural network (RNN)
Source
IEEE Transactions on Neural Networks
WoS Q Value
Q1
Scopus Q Value
N/A
Volume
18
Issue
2