A Novel Regularized Extreme Learning Machine Based on L1-Norm and L2-Norm: a Sparsity Solution Alternative to Lasso and Elastic Net


Yıldırım H., Özkale M. R.

Cognitive Computation, vol. 16, no. 2, pp. 641-653, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 16 Issue: 2
  • Publication Date: 2024
  • DOI: 10.1007/s12559-023-10220-w
  • Journal Name: Cognitive Computation
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, PsycINFO
  • Page Numbers: pp. 641-653
  • Keywords: Extreme learning machine, Ill-posed problems, Liu regression, Sparsity, Tikhonov regularization
  • Çukurova University Affiliated: Yes

Abstract

The aim of this study is to present a new regularized extreme learning machine (ELM) algorithm that performs variable selection through the simultaneous use of ridge and Liu regression, in order to cope with drawbacks of ELM and its variants such as instability, poor generalization performance, and lack of sparsity. The proposed algorithm was compared with classical ELM as well as with variants based on the ridge, Liu, Lasso, and Elastic Net approaches, using cross-validation and the best tuning parameter, on seven real-world applications, and their performances were reported comparatively. On the majority of the datasets, the proposed algorithm outperformed the ridge, Lasso, and Elastic Net algorithms in training prediction performance (average 40%) and stability (average 80%), and in test prediction performance (average 20%) and stability (60%). In addition, the proposed ELM was more compact (better sparsity) with lower norm values. The experimental results on real-world applications confirm that, under favorable conditions, the proposed ELM provides effective solutions to the mentioned drawbacks, yielding more stable and sparse solutions with better generalization performance than its competitors. Consequently, the proposed algorithm represents a powerful alternative for both regression and classification tasks in machine learning due to its theoretical flexibility.
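
The abstract describes the method only at a high level. As an illustration, the minimal sketch below shows a hypothetical ELM whose output weights blend ridge and Liu shrinkage through the classical two-parameter form β(k, d) = (HᵀH + kI)⁻¹(Hᵀy + kd·β_LS), which reduces to the ridge solution at d = 0 and the Liu solution at k = 1. The function names, tanh activation, and this exact estimator are assumptions chosen for illustration, not the paper's implementation, and the sketch omits the paper's variable-selection (sparsity) step.

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, k=0.1, d=0.5, seed=0):
    """Fit an ELM whose output weights use an assumed ridge-Liu blend."""
    rng = np.random.default_rng(seed)
    # Standard ELM: input weights and biases are random and stay fixed.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    G = H.T @ H
    I = np.eye(n_hidden)
    # Least-squares output weights; pinv copes with an ill-conditioned H'H.
    beta_ls = np.linalg.pinv(G) @ (H.T @ y)
    # Two-parameter ridge-Liu blend (assumed form, see lead-in):
    # d = 0 recovers the ridge estimate, k = 1 recovers the Liu estimate.
    beta = np.linalg.solve(G + k * I, H.T @ y + k * d * beta_ls)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict with a fitted ELM."""
    return np.tanh(X @ W + b) @ beta
```

In keeping with the abstract's experimental protocol, the tuning parameters k and d would be selected by cross-validation over a grid of candidate values.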