Implementation of Caputo type fractional derivative chain rule on back propagation algorithm


Date

2024

Journal Title

Journal ISSN

Volume Title

Publisher

Elsevier

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

Fractional gradient computation is a challenging task for neural networks. In this study, the history of fractional differentiation is briefly reviewed, and the core framework of the Faà di Bruno formula is applied to the fractional gradient computation problem. As an analytical approach to the chain-rule problem of fractional derivatives, the use of the Faà di Bruno formula for the Caputo-type fractional derivative is proposed. When the fractional gradient is calculated with the proposed approach, the problem of determining the bounds of the formula for computing the Caputo derivative is addressed. As a consequence, the fundamental problem of the fractional back-propagation algorithm is solved analytically, paving the way for the use of any differentiable and integrable activation function, provided it is stable. The development of the algorithm and its practical implementation on a photovoltaic fault-detection data-set are investigated. The reliability of the neural-network metrics is also examined using analysis of variance (ANOVA), and the results are presented to decide which metrics and which fractional order are best. Ordinary back-propagation and fractional back-propagation with and without L2 regularization and momentum are compared in the results to show the advantages of using L2 regularization and momentum with the fractional-order gradient. Consequently, the test metrics are reliable but cannot be separated from each other. When the batch size is varied from 2 to full batches, the proposed system performs better with larger batches, whereas adaptive moment estimation (ADAM) performs better with small batches. Cross-validation shows that the fractional back-propagation neural networks outperform the ordinary neural networks. Interestingly, the best order for a specific data-set cannot be determined in general, because it depends on the batch size, the number of epochs and the cross-validation folds.
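The abstract describes updating weights with a Caputo-type fractional gradient rather than the ordinary gradient. As a minimal illustrative sketch (not the paper's exact algorithm), a commonly used first-term truncation of the Caputo derivative of a loss L at weight w, with lower terminal c, is D^α L(w) ≈ L'(w) · |w − c|^(1−α) / Γ(2−α); the toy quadratic loss, learning rate, and terminal-update rule below are assumptions chosen for demonstration:

```python
import math

def caputo_frac_grad(grad, w, c, alpha):
    # First-term truncation of the Caputo fractional derivative of the
    # loss with respect to w, lower terminal c (illustrative assumption):
    #   D^alpha L(w) ~= L'(w) * |w - c|^(1 - alpha) / Gamma(2 - alpha)
    return grad * abs(w - c) ** (1.0 - alpha) / math.gamma(2.0 - alpha)

# Toy problem: minimise L(w) = (w - 3)^2 by fractional gradient descent.
alpha, lr = 0.9, 0.1     # fractional order in (0, 1) and learning rate
w, c = 0.0, -1.0         # weight and lower terminal (offset so |w - c| > 0)
for _ in range(200):
    grad = 2.0 * (w - 3.0)                       # ordinary gradient of L
    w_new = w - lr * caputo_frac_grad(grad, w, c, alpha)
    c, w = w, w_new                              # slide terminal to the last iterate
print(w)
```

With alpha → 1 the scaling factor tends to 1 and the update reduces to ordinary gradient descent, which is one way to sanity-check a fractional implementation.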

Description

Keywords

Caputo Derivative, Back Propagation, Fractional Neural Network, Regularization, Cross Validation, Fractional Gradient

Source

Applied Soft Computing

WoS Q Value

N/A

Scopus Q Value

Q1

Volume

155

Issue

Citation