LL-ELM: A regularized extreme learning machine based on L-1-norm and Liu estimator


Yildirim H., Ozkale M. R.

NEURAL COMPUTING & APPLICATIONS, vol.33, no.16, pp.10469-10484, 2021 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 33 Issue: 16
  • Publication Date: 2021
  • DOI: 10.1007/s00521-021-05806-0
  • Journal Name: NEURAL COMPUTING & APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, Index Islamicus, INSPEC, zbMATH
  • Page Numbers: pp.10469-10484
  • Keywords: Extreme learning machine, Regularized extreme learning machine, Liu estimator, Ridge regression, Lasso, Elastic net
  • Çukurova University Affiliated: Yes

Abstract

In this paper, we propose a novel regularization and variable selection algorithm called Liu-Lasso extreme learning machine (LL-ELM) to deal with drawbacks of the ELM such as instability, poor generalizability, and underfitting or overfitting caused by inappropriate selection of hidden layer nodes. The Liu estimator, a statistically biased estimator, is incorporated into the learning phase of the proposed algorithm together with the Lasso regression approach. The proposed algorithm is compared with the conventional ELM and its variants, including ELM forms based on the Liu estimator (Liu-ELM), the L-1-norm (Lasso-ELM), the L-2-norm (Ridge-ELM), and the elastic net (Enet-ELM). Suitable selection methods for determining the tuning parameters of each algorithm are used in the comparisons. The results show that there always exists a d value such that LL-ELM outperforms either Lasso-ELM or Enet-ELM in terms of learning and generalization performance. This improvement of LL-ELM over Lasso-ELM and Enet-ELM in terms of testing root mean square error reaches up to 27%, depending on the proposed d selection methods. Consequently, LL-ELM can be considered a competitive algorithm for both regression and classification tasks owing to its easy integrability.
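To illustrate the kind of learning rule the abstract refers to, the sketch below implements a plain Liu-estimator ELM (the Liu-ELM baseline, not the full LL-ELM with its Lasso step or the paper's d selection methods). It uses the standard Liu form beta(d) = (H'H + I)^(-1)(H'H + d I) beta_LS applied to the hidden-layer output matrix H; the function names, the sigmoid activation, and the fixed d are illustrative assumptions, not the authors' code.

```python
import numpy as np

def liu_elm_fit(X, y, n_hidden=30, d=0.5, seed=0):
    """Fit a single-hidden-layer ELM with a Liu-estimator output layer.

    A sketch of the Liu-ELM baseline: random input weights, sigmoid
    hidden layer, and the Liu estimator in place of least squares.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (not trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer output matrix

    beta_ls = np.linalg.pinv(H) @ y               # ordinary least-squares ELM solution
    HtH = H.T @ H
    I = np.eye(n_hidden)
    # Liu estimator: shrinks beta_ls; d -> 1 recovers least squares,
    # d -> 0 gives the ridge solution with penalty parameter 1.
    beta_d = np.linalg.solve(HtH + I, (HtH + d * I) @ beta_ls)
    return W, b, beta_d

def liu_elm_predict(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

In the full LL-ELM, an L-1 penalty would additionally zero out some hidden-node coefficients, combining the shrinkage of the Liu estimator with Lasso-style variable selection; the d parameter here would be chosen by one of the selection methods studied in the paper rather than fixed.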