Research Article

An Empirical Evaluation of Feature Selection Stability and Classification Accuracy

Year 2024, Volume: 37 Issue: 2, 606 - 620, 01.06.2024
https://doi.org/10.35378/gujs.998964

Abstract

The performance of inductive learners can degrade on high-dimensional datasets, and feature selection methods are used to address this issue. Selecting relevant features and reducing dimensionality are essential for building accurate machine learning models. Stability is an important criterion in feature selection: stable feature selection algorithms maintain their feature preferences under small variations in the training set. Studies have emphasized the importance of stable feature selection, particularly when the number of samples is small and the dimensionality is high. In this study, we evaluated the relationships among stability measures, as well as between feature selection stability and classification accuracy, using Pearson's correlation coefficient (also known as the product-moment correlation coefficient, or simply Pearson's r). We conducted an extensive series of experiments using five filter and two wrapper feature selection methods, three classifiers for subset and classification performance evaluation, and eight real-world datasets drawn from two data repositories. We measured the stability of the feature selection methods with twelve stability metrics. The correlation analyses yielded no substantial evidence of a linear relationship between feature selection stability and classification accuracy; however, strong positive correlations were observed among several of the stability metrics.
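To make the quantities being correlated concrete, the sketch below (a minimal Python illustration assuming scikit-learn and SciPy, not the authors' implementation) runs one such experiment: it uses an ANOVA F-test filter as a stand-in for the paper's seven selection methods, measures stability as the average pairwise Jaccard similarity of the subsets selected across cross-validation folds (one representative of the twelve metrics studied), and computes Pearson's r between stability and mean test accuracy. The dataset, classifier, and subset sizes are illustrative choices, not those of the paper.

```python
# Minimal sketch (assumes scikit-learn and SciPy; not the paper's code).
# Stability = average pairwise Jaccard similarity of the feature subsets
# selected across CV folds; accuracy = mean test accuracy of a classifier
# trained on each fold's selected features. Pearson's r relates the two.
from itertools import combinations

import numpy as np
from scipy.stats import pearsonr
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def jaccard_stability(subsets):
    """Average pairwise Jaccard similarity of selected-feature index sets."""
    return np.mean([len(a & b) / len(a | b) for a, b in combinations(subsets, 2)])


X, y = load_breast_cancer(return_X_y=True)  # illustrative dataset
stabilities, accuracies = [], []
for k in (5, 10, 15, 20):  # vary subset size to get several (stability, accuracy) pairs
    subsets, fold_accs = [], []
    for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
        selector = SelectKBest(f_classif, k=k).fit(X[train], y[train])  # filter stand-in
        chosen = sorted(np.flatnonzero(selector.get_support()))
        subsets.append(frozenset(chosen))
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X[train][:, chosen], y[train])
        fold_accs.append(accuracy_score(y[test], clf.predict(X[test][:, chosen])))
    stabilities.append(jaccard_stability(subsets))
    accuracies.append(np.mean(fold_accs))

r, p = pearsonr(stabilities, accuracies)  # linear association between the two quantities
print(f"Pearson's r = {r:.3f} (p = {p:.3f})")
```

Swapping in a different selector, classifier, or similarity function in place of jaccard_stability reproduces the other cells of such an experiment grid.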

References

  • [1] Loscalzo, S., Yu, L., Ding, C., “Consensus group based stable feature selection”, Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 567-576, Paris, France, (2009).
  • [2] Kalousis, A., Prados, J., Hilario, M., “Stability of feature selection algorithms: a study on high-dimensional spaces”, Knowledge and Information Systems, 12: 95-116, (2007).
  • [3] Nogueira, S., “Quantifying the stability of feature selection”, Ph.D. Thesis, University of Manchester, Manchester, United Kingdom, 21-67, (2018).
  • [4] Wang, H., Khoshgoftaar, T.M., Liang, Q., “Stability and classification performance of feature selection techniques”, 2011 10th International Conference on Machine Learning and Applications and Workshops, 151-156, Honolulu, HI, USA, (2011).
  • [5] Drotár, P., Smékal, Z., “Stability of feature selection algorithms and its influence on prediction accuracy in biomedical datasets”, TENCON 2014 - 2014 IEEE Region 10 Conference, 1-5, Bangkok, Thailand, (2014).
  • [6] Han, Y., Yu, L., “A variance reduction framework for stable feature selection”, 2010 IEEE International Conference on Data Mining, 206-215, Sydney, NSW, Australia, (2010).
  • [7] Domingos, P., “A unified bias-variance decomposition and its applications”, Proceedings of the 17th International Conference on Machine Learning, 231-238, Stanford, CA, USA, (2000).
  • [8] Munson, M.A., Caruana, R., “On feature selection, bias-variance, and bagging”, ECML PKDD ’09: Machine Learning and Knowledge Discovery in Databases, 144-159, (2009).
  • [9] Turney, P., “Technical note: bias and the quantification of stability”, Machine Learning, 20: 23-33, (1995).
  • [10] Alelyani, S., Liu, H., Wang, L., “The effect of the characteristics of the dataset on the selection stability”, 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence, 970-977, Boca Raton, FL, USA, (2011).
  • [11] Gulgezen, G., Cataltepe, Z., Yu, L., “Stable and accurate feature selection”, ECML PKDD ’09: Machine Learning and Knowledge Discovery in Databases, 5781: 455-468, (2009).
  • [12] Chu, C., Hsu, A.-L., Chou, K.-H., Bandettini, P., Lin, C., “Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images”, NeuroImage, 60(1): 59-70, (2012).
  • [13] Karabulut, E.M., Ozel, S.A., Ibrikci, T., “Comparative study on the effect of feature selection on classification accuracy”, Procedia Technology, 1: 323-327, (2012).
  • [14] Janecek, A., Gansterer, W., Demel, M., Ecker, G., “On the relationship between feature selection and classification accuracy”, Journal of Machine Learning Research, 4: 90-105, (2008).
  • [15] Amaldi, E., Kann, V., “On the approximation of minimizing non-zero variables or unsatisfied relations in linear systems”, Theoretical Computer Science, 209(1-2): 237-260, (1998).
  • [16] Ang, J.C., Mirzal, A., Haron, H., Hamed, H.N.A., “Supervised, unsupervised, and semi-supervised feature selection: a review on gene selection”, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 13(5): 971-989, (2015).
  • [17] Lazar, C., Taminau, J., Meganck, S., Steenhoff, D., Coletta, A., Molter, C., de Schaetzen, V., Duque, R., Bersini, H., Nowé, A., “A survey on filter techniques for feature selection in gene expression microarray analysis”, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 9(4): 1106-1119, (2012).
  • [18] Kohavi, R., John, G.H., “Wrappers for feature subset selection”, Artificial Intelligence, 97(1-2): 273-324, (1997).
  • [19] Chandrashekar, G., Sahin, F., “A survey on feature selection methods”, Computers and Electrical Engineering, 40(1): 16-28, (2014).
  • [20] Lal, T.N., Chapelle, O., Weston, J., Elisseeff, A., “Embedded methods”, Feature Extraction, Studies in Fuzziness and Soft Computing, 207: 137-165, (2006).
  • [21] Cateni, S., Colla, V., Vannucci, M., “A hybrid feature selection method for classification purposes”, 8th European Modeling Symposium on Mathematical Modeling and Computer Simulation EMS2014, 39-44, Pisa, Italy, (2014).
  • [22] Saeys, Y., Abeel, T., Van de Peer, Y., “Robust feature selection using ensemble feature selection techniques”, ECML PKDD 2008: Machine Learning and Knowledge Discovery in Databases, 5212: 313-325, (2008).
  • [23] Khoshgoftaar, T.M., Fazelpour, A., Wang, H., Wald, R., “A survey of stability analysis of feature subset selection techniques”, 2013 IEEE 14th International Conference on Information Reuse & Integration (IRI), 424-431, San Francisco, CA, USA, (2013).
  • [24] Khaire, U.M., Dhanalakshmi, R., “Stability of feature selection algorithm: a review”, Journal of King Saud University - Computer and Information Sciences, 34(4): 1060-1073, (2022).
  • [25] Dua, D., Graff, C., “The UCI Machine Learning Repository”, University of California, School of Information and Computer Science, Irvine, CA, http://archive.ics.uci.edu/ml, (2019).
  • [26] Alcala-Fdez, J., Fernandez, A., Luengo, J., Derrac, J., Garcia, S., Sanchez, L., Herrera, F., “KEEL data-mining software tool: data set repository, integration of algorithms and experimental analysis framework”, Journal of Multiple-Valued Logic and Soft Computing, 17: 255-287, (2011).
  • [27] Berrar, D., “Cross-Validation”, Encyclopedia of Bioinformatics and Computational Biology, 3: 542-545, (2018).
  • [28] Som, R.K., “Practical Sampling Techniques”, Second Edition, United Kingdom, CRC Press, Taylor & Francis Group, 389-423, (1996).
  • [29] Wang, D., Zhang, H., Liu, R., “T-Test feature selection approach based on term frequency for text categorization”, Pattern Recognition Letters, 45: 1-10, (2014).
  • [30] Reyes-Aldasoro, C.C., Bhalerao, A., “The Bhattacharyya space for feature selection and its application to texture segmentation”, Pattern Recognition, 39(5): 812-826, (2006).
  • [31] Largeron, C., Moulin, C., Gery, M., “Entropy based feature selection for text categorization”, SAC ’11: Proceedings of the 2011 ACM Symposium on Applied Computing, Taichung, Taiwan, 924-928, (2011).
  • [32] Shilaskar, S., Ghatol, A., “Feature selection for medical diagnosis evaluation for cardiovascular diseases”, Expert Systems with Applications, 40(10): 4146-4153, (2013).
  • [33] Serrano-Lopez, A., Olivas, E.S., Martín-Guerrero, J.D., Magdalena, R., Gómez-Sanchís, J., “Feature selection using ROC curves on classification problems”, IJCNN ’10: International Joint Conference on Neural Networks, 1-6, Barcelona, Spain, (2010).
  • [34] Theodoridis, S., Koutroumbas, K., “Pattern Recognition”, 4th ed., USA: Academic Press, 261-322, (2009).
  • [35] Aha, D.W., Bankert, R.L., “A comparative evaluation of sequential feature selection algorithms”, Lecture Notes in Statistics, Learning from Data, 112: 199-206, (1996).
  • [36] Alelyani, S., “On feature selection stability: a data perspective”, Ph.D. Thesis, Arizona State University, Phoenix, USA, 10-40, (2013).
  • [37] González, J., Ortega, J., Damas, M., Martín-Smith, P., Gan, J.Q., “A new multi-objective wrapper method for feature selection – accuracy and stability analysis for BCI”, Neurocomputing, 333: 407-418, (2019).
  • [38] Wang, A., Liu, H., Liu, J., Ding, H., Yang J., Chen, G., “Stable and accurate feature selection from microarray data with ensembled fast correlation based filter”, 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2996-2998, Seoul, South Korea, (2020).
  • [39] Deraeve, J., Alexander, W.H., “Fast, accurate, and stable feature selection using neural networks”, Neuroinformatics, 16(2): 253-268, (2018).
  • [40] Krizek, P., Kittler, J., Hlavac, V., “Improving stability of feature selection methods”, 12th International Conference on Computer Analysis of Images and Patterns (CAIP), 929-936, Vienna, Austria, (2007).

Details

Primary Language English
Subjects Engineering
Journal Section Computer Engineering
Authors

Mustafa Büyükkeçeci (ORCID: 0000-0002-1970-8952)

Mehmet Cudi Okur (ORCID: 0000-0002-0096-9087)

Early Pub Date November 22, 2023
Publication Date June 1, 2024
Published in Issue Year 2024 Volume: 37 Issue: 2

Cite

APA Büyükkeçeci, M., & Okur, M. C. (2024). An Empirical Evaluation of Feature Selection Stability and Classification Accuracy. Gazi University Journal of Science, 37(2), 606-620. https://doi.org/10.35378/gujs.998964
AMA Büyükkeçeci M, Okur MC. An Empirical Evaluation of Feature Selection Stability and Classification Accuracy. Gazi University Journal of Science. June 2024;37(2):606-620. doi:10.35378/gujs.998964
Chicago Büyükkeçeci, Mustafa, and Mehmet Cudi Okur. “An Empirical Evaluation of Feature Selection Stability and Classification Accuracy”. Gazi University Journal of Science 37, no. 2 (June 2024): 606-20. https://doi.org/10.35378/gujs.998964.
EndNote Büyükkeçeci M, Okur MC (June 1, 2024) An Empirical Evaluation of Feature Selection Stability and Classification Accuracy. Gazi University Journal of Science 37 2 606–620.
IEEE M. Büyükkeçeci and M. C. Okur, “An Empirical Evaluation of Feature Selection Stability and Classification Accuracy”, Gazi University Journal of Science, vol. 37, no. 2, pp. 606–620, 2024, doi: 10.35378/gujs.998964.
ISNAD Büyükkeçeci, Mustafa - Okur, Mehmet Cudi. “An Empirical Evaluation of Feature Selection Stability and Classification Accuracy”. Gazi University Journal of Science 37/2 (June 2024), 606-620. https://doi.org/10.35378/gujs.998964.
JAMA Büyükkeçeci M, Okur MC. An Empirical Evaluation of Feature Selection Stability and Classification Accuracy. Gazi University Journal of Science. 2024;37:606–620.
MLA Büyükkeçeci, Mustafa and Mehmet Cudi Okur. “An Empirical Evaluation of Feature Selection Stability and Classification Accuracy”. Gazi University Journal of Science, vol. 37, no. 2, 2024, pp. 606-20, doi:10.35378/gujs.998964.
Vancouver Büyükkeçeci M, Okur MC. An Empirical Evaluation of Feature Selection Stability and Classification Accuracy. Gazi University Journal of Science. 2024;37(2):606-20.