A NEW ESTIMATION APPROACH IN MACHINE LEARNING REGRESSION MODEL

Date

2024

Publisher

University of Cincinnati

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

In recent years, machine learning has become a frequently used method for statistical estimation. Random forest regression, decision tree regression, support vector regression, and polynomial regression are commonly used supervised machine learning methods. The loss function most commonly used in gradient descent during the optimization phase of these methods is the quadratic loss function, which estimates model parameters by minimizing the cost. Selecting an appropriate loss function is therefore crucial when choosing a method. Several loss functions exist in the literature, such as absolute loss, logarithmic loss, and squared error loss. In this study, we propose the use of an inverted normal loss function, which is a finite loss function, to gain a new perspective on minimizing cost and measuring performance in machine learning regression problems. We assert that this loss function provides more accurate estimates of cost minimization than the quadratic loss function, which is an infinite loss function. This article presents a new approach based on the inverted normal loss function for optimization in regression and for performance metrics in machine learning. The procedure and its advantages are illustrated using a simulation study. © INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING.
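For context, the inverted normal loss function (INLF) mentioned in the abstract is commonly written in the loss-function literature as L(e) = K[1 − exp(−e²/(2γ²))], a loss that grows roughly quadratically near zero but is bounded above by K, unlike the quadratic loss. The sketch below is a minimal illustration of that idea, not the paper's own implementation or simulation: a simple linear regression fitted by gradient descent on the mean INLF cost. The constants K and γ, the learning rate, the least-squares warm start, and the simulated data are all assumed values chosen for demonstration.

```python
import numpy as np

def inlf(residuals, K=1.0, gamma=1.0):
    """Inverted normal loss: roughly quadratic near zero, saturating at K."""
    return K * (1.0 - np.exp(-residuals**2 / (2.0 * gamma**2)))

def inlf_grad(residuals, K=1.0, gamma=1.0):
    """Derivative of the INLF with respect to the residual e = y - y_hat."""
    return (K / gamma**2) * residuals * np.exp(-residuals**2 / (2.0 * gamma**2))

def fit_linear_inlf(x, y, lr=0.1, epochs=3000, K=1.0, gamma=1.0):
    """Fit y ~ w*x + b by gradient descent on the mean INLF cost."""
    # The bounded loss is non-convex, so warm-start from the least-squares fit.
    w, b = np.polyfit(x, y, 1)
    for _ in range(epochs):
        e = y - (w * x + b)
        g = inlf_grad(e, K, gamma)       # dL/de for each observation
        # Chain rule: de/dw = -x and de/db = -1.
        w -= lr * np.mean(-g * x)
        b -= lr * np.mean(-g)
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 5.0, 200)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, 200)
    y[:5] += 15.0                        # a few gross outliers
    w, b = fit_linear_inlf(x, y)
    # Each outlier adds at most K to the cost and contributes almost no
    # gradient, so the fit stays near w = 2, b = 1; a quadratic-loss fit
    # would be pulled toward the outliers.
    print(f"INLF fit: w = {w:.3f}, b = {b:.3f}")
    print(f"mean INLF cost: {np.mean(inlf(y - (w * x + b))):.3f}")
```

Because the loss saturates at K, observations with very large residuals contribute almost nothing to the gradient, which is the practical difference from the unbounded quadratic loss that the abstract highlights.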

Keywords

Cost Functions, Gradient Descent, Inverted Normal Loss Functions, Loss Functions, Machine Learning, Optimization

Source

International Journal of Industrial Engineering: Theory, Applications and Practice

Scopus Q Value

Q3

Volume

31

Issue

4
