Implementation of Caputo type fractional derivative chain rule on back propagation algorithm

dc.authoridCandan, Mucahid/0000-0003-3998-6930
dc.contributor.authorCandan, Mucahid
dc.contributor.authorCubukcu, Mete
dc.date.accessioned2024-08-31T07:47:32Z
dc.date.available2024-08-31T07:47:32Z
dc.date.issued2024
dc.departmentEge Üniversitesien_US
dc.description.abstractFractional gradient computation is a challenging task for neural networks. In this study, a brief history of fractional differentiation is summarized, and the core framework of the Faà di Bruno formula is applied to the fractional gradient computation problem. As an analytical approach to the chain rule problem of fractional derivatives, the use of the Faà di Bruno formula for the Caputo-type fractional derivative is proposed. When the fractional gradient is calculated with the proposed approach, the problem of determining the bounds of the Caputo derivative formula is addressed. As a consequence, the fundamental problem of the fractional back-propagation algorithm is solved analytically, paving the way for the use of any differentiable and integrable activation function, provided it is stable. The development of the algorithm and its practical implementation on a photovoltaic fault detection dataset are investigated. The reliability of the neural network metrics is also examined using analysis of variance (ANOVA), and the results are used to decide which metrics and which fractional order are best. Ordinary back-propagation and fractional back-propagation, with and without L2 regularization and momentum, are compared in the results to show the advantages of using L2 regularization and momentum with the fractional-order gradient. Consequently, the test metrics are reliable but cannot be statistically separated from one another. As the batch size is varied from 2 up to full batch, the proposed system performs better with larger batches, whereas adaptive moment estimation (ADAM) performs better with small batches. Cross-validation shows that fractional back-propagation neural networks outperform ordinary neural networks. Notably, the best fractional order for a specific dataset cannot be determined in general, because it depends on the batch size, the number of epochs, and the cross-validation folds.en_US
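A minimal sketch of the central objects named in the abstract (the notation below is standard, but it is an assumption on our part, since the record itself contains no formulas): for an order $0<\alpha<1$, the Caputo fractional derivative of $f$ with lower terminal $a$ is

\[
{}^{C}_{a}D^{\alpha}_{t} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{a}^{t} (t-\tau)^{-\alpha}\, f'(\tau)\, \mathrm{d}\tau .
\]

Unlike the integer-order case, the naive chain rule fails, i.e. ${}^{C}D^{\alpha}(f\circ g) \neq (f'\circ g)\,{}^{C}D^{\alpha} g$ in general, which is why a Faà di Bruno type expansion is needed to back-propagate a fractional gradient through composed layers. As a further illustration only (this first-order truncation is common in the fractional gradient descent literature and is not necessarily the exact scheme of the paper), a Caputo-based weight update takes the form

\[
w \;\leftarrow\; w \;-\; \eta\, \frac{(w - c)^{1-\alpha}}{\Gamma(2-\alpha)}\, \frac{\partial L}{\partial w},
\]

where $c$ is the lower terminal of the derivative, $\eta$ the learning rate, and $L$ the loss; setting $\alpha = 1$ recovers ordinary gradient descent.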
dc.identifier.doi10.1016/j.asoc.2024.111475
dc.identifier.issn1568-4946
dc.identifier.issn1872-9681
dc.identifier.scopus2-s2.0-85187203978en_US
dc.identifier.scopusqualityQ1en_US
dc.identifier.urihttps://doi.org/10.1016/j.asoc.2024.111475
dc.identifier.urihttps://hdl.handle.net/11454/104465
dc.identifier.volume155en_US
dc.identifier.wosWOS:001221670500001en_US
dc.identifier.wosqualityN/Aen_US
dc.indekslendigikaynakWeb of Scienceen_US
dc.indekslendigikaynakScopusen_US
dc.language.isoenen_US
dc.publisherElsevieren_US
dc.relation.ispartofApplied Soft Computingen_US
dc.relation.publicationcategoryMakale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanıen_US
dc.rightsinfo:eu-repo/semantics/closedAccessen_US
dc.snmz20240831_Uen_US
dc.subjectCaputo Derivativeen_US
dc.subjectBack Propagationen_US
dc.subjectFractional Neural Networken_US
dc.subjectRegularizationen_US
dc.subjectCross Validationen_US
dc.subjectFractional Gradienten_US
dc.titleImplementation of Caputo type fractional derivative chain rule on back propagation algorithmen_US
dc.typeArticleen_US
