The performance of inductive learners can be degraded by high-dimensional datasets. Feature selection methods address this issue: selecting relevant features and reducing data dimensionality are essential for building accurate machine learning models. Stability is an important criterion in feature selection. A stable feature selection algorithm maintains its feature preferences even when small variations exist in the training set. Studies have emphasized the importance of stable feature selection, particularly when the number of samples is small and the dimensionality is high. In this study, we evaluated the relationships among stability measures, as well as the relationship between feature selection stability and classification accuracy, using Pearson's correlation coefficient (also known as Pearson's product-moment correlation coefficient, or simply Pearson's r). We conducted an extensive series of experiments using five filter and two wrapper feature selection methods, three classifiers for subset evaluation and classification performance, and eight real-world datasets drawn from two data repositories. We measured the stability of the feature selection methods with a total of twelve stability metrics. Based on the correlation analyses, we found no substantial evidence of a linear relationship between feature selection stability and classification accuracy. However, we observed strong positive correlations among several of the stability metrics.
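For reference, Pearson's r between two paired samples x and y is r = cov(x, y) / (σ_x · σ_y), ranging from -1 to +1, and it captures only linear association. The following is a minimal sketch of the kind of correlation analysis described above; the stability and accuracy values are hypothetical placeholders, not the study's data, and the analysis shown is an illustration of the technique rather than the authors' exact procedure.

```python
from scipy.stats import pearsonr

# Hypothetical paired observations: one stability score and one
# classification accuracy per (feature selection method, dataset)
# experiment. These numbers are illustrative only.
stability = [0.72, 0.65, 0.88, 0.54, 0.91, 0.60, 0.77, 0.49]
accuracy  = [0.83, 0.79, 0.86, 0.81, 0.84, 0.80, 0.82, 0.78]

# Pearson's r measures the strength of a *linear* relationship.
# An r near 0 with a high p-value is consistent with the finding
# of no substantial linear link between stability and accuracy.
r, p_value = pearsonr(stability, accuracy)
print(f"Pearson's r = {r:.3f}, p = {p_value:.3f}")
```

The same call can be applied pairwise to the stability metrics themselves, which is how a strong positive correlation among several metrics would be detected.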
Keywords: Feature selection, Selection stability, Classification accuracy, Filter methods, Wrapper methods
| Primary Language | English |
| --- | --- |
| Subjects | Engineering |
| Journal Section | Computer Engineering |
| Authors | |
| Early Pub Date | November 22, 2023 |
| Publication Date | June 1, 2024 |
| Published in Issue | Year 2024, Volume 37, Issue 2 |