The extreme learning machine (ELM), a single-hidden-layer feedforward neural network, offers extremely fast training and good generalization performance. The ELM, however, has a well-known drawback: it is sensitive to ill-conditioned data. To overcome the ill-conditioning problem, ELM based on ridge regression (RR-ELM) was proposed. Since RR-ELM is a biased method, ELM based on almost unbiased ridge regression (AUR-ELM) was subsequently proposed to reduce this bias to some extent. Both RR-ELM and AUR-ELM, introduced to handle multicollinearity, depend on a regularization parameter, which affects their performance, and there is no consensus on how this parameter should be selected. Although linear regression offers various methods for selecting the regularization parameter, only one method, based on minimizing the mean squared error, has been used in RR-ELM. In this study, the AIC, BIC and CV criteria are proposed as alternative methods for selecting the regularization parameter in RR-ELM and AUR-ELM. An experimental study was conducted on eight data sets that are widely known and used in machine learning research. The analyses are relevant to regression, one of the most important fields of expert systems and machine learning. The results demonstrate that the method used to select the regularization parameter significantly affects both the generalization and, in particular, the stability performance of RR-ELM and AUR-ELM when compared to ELM. (C) 2019 Elsevier Ltd. All rights reserved.
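To make the setting concrete, the following is a minimal sketch, not the paper's implementation, of an RR-ELM with cross-validation over the regularization parameter. It assumes the standard ridge-regularized output weights beta = (H'H + lam*I)^{-1} H'y, where H is the random hidden-layer output matrix; all function names (`elm_hidden`, `rr_elm_fit`, `cv_select_lambda`) and the candidate lambda grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random hidden-layer activations with a sigmoid nonlinearity."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def rr_elm_fit(H, y, lam):
    """Ridge-regression output weights: beta = (H'H + lam*I)^-1 H'y."""
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

def cv_select_lambda(X, y, W, b, lams, k=5):
    """Pick the regularization parameter by k-fold cross-validation MSE."""
    idx = np.arange(len(X))
    folds = np.array_split(idx, k)
    best_lam, best_err = None, np.inf
    for lam in lams:
        err = 0.0
        for fold in folds:
            tr = np.setdiff1d(idx, fold)
            beta = rr_elm_fit(elm_hidden(X[tr], W, b), y[tr], lam)
            pred = elm_hidden(X[fold], W, b) @ beta
            err += np.mean((pred - y[fold]) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Toy regression data standing in for the benchmark data sets.
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Random, untrained hidden layer (the defining feature of an ELM).
n_hidden = 50
W = rng.normal(size=(3, n_hidden))
b = rng.normal(size=n_hidden)

lam = cv_select_lambda(X, y, W, b, lams=[1e-4, 1e-2, 1.0])
H = elm_hidden(X, W, b)
beta = rr_elm_fit(H, y, lam)
print("selected lambda:", lam, "train MSE:", np.mean((H @ beta - y) ** 2))
```

AIC- or BIC-based selection would replace the CV loop with an information-criterion score computed from the residual sum of squares and the effective degrees of freedom of the ridge fit; the study compares these criteria against each other.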