Research Article

To what extent are item discrimination values realistic? A new index for two-dimensional structures

Year 2022, Volume 9, Issue 3, 728–740, 30.09.2022
https://doi.org/10.21449/ijate.1098757

Abstract

When analyzing item discrimination in multidimensional structures under Classical Test Theory, most researchers examine the corrected item-total correlation computed over the whole test, which can underestimate item discrimination and lead to items being removed from the test unnecessarily. Researchers may instead examine the corrected item-total correlation within the factor to which an item belongs; however, that approach does not provide a general overview of the entire test. Motivated by this problem, this study proposes a new index for investigating item discrimination in two-dimensional structures and evaluates it through a Monte Carlo simulation in which sample size, item discrimination, inter-factor correlation, and the number of response categories are varied. Based on the results, the proposed item discrimination index shows acceptable performance for two-dimensional structures. Accordingly, this new index can be recommended to researchers investigating item discrimination in two-dimensional structures.
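
The contrast drawn above, correlating an item with the remaining items of the whole test versus with the remaining items of its own factor, can be illustrated with a short simulation. The following is a minimal base-R sketch (R is the software cited in the references) under assumed values: a two-factor congeneric model, ten items per factor, a common loading of .70, and an inter-factor correlation of .30. It reproduces neither the authors' simulation design nor the proposed index; it only shows the two conventional corrected item-total correlations that the abstract contrasts.

    # Minimal sketch (illustrative, not the authors' code): corrected item-total
    # correlation computed against the whole two-factor test versus against the
    # item's own factor only. All parameter values below are assumptions.
    set.seed(1)
    n       <- 500    # sample size
    r_f     <- 0.30   # inter-factor correlation (assumed)
    loading <- 0.70   # common item discrimination / loading (assumed)
    n_items <- 10     # items per factor (assumed)

    # Two correlated latent factors
    f1 <- rnorm(n)
    f2 <- r_f * f1 + sqrt(1 - r_f^2) * rnorm(n)

    # Congeneric items: item = loading * factor + unique error
    gen_items <- function(f) {
      sapply(seq_len(n_items),
             function(i) loading * f + rnorm(n, sd = sqrt(1 - loading^2)))
    }
    items_f1 <- gen_items(f1)
    items_f2 <- gen_items(f2)
    test     <- cbind(items_f1, items_f2)

    item <- test[, 1]  # an item belonging to factor 1

    # Corrected item-total correlation against the WHOLE test (item excluded)
    r_whole  <- cor(item, rowSums(test[, -1]))

    # Corrected item-total correlation against the item's OWN factor (item excluded)
    r_factor <- cor(item, rowSums(items_f1[, -1]))

    round(c(whole_test = r_whole, own_factor = r_factor), 3)
    # The whole-test value is noticeably smaller than the own-factor value,
    # i.e., the conventional index understates the item's discrimination.

In this illustrative setup the whole-test correlation comes out noticeably lower than the within-factor one, which is the kind of underestimation described above; the gap widens as the inter-factor correlation decreases.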

References

  • Ak, M.O., & Alpullu, A. (2020). Alpak akış ölçeği geliştirme ve Doğu Batı üniversitelerinin karşılaştırılması [Alpak flow scale development and comparison of east west universities]. E Journal of New World Sciences Academy, 15(1), 1–16. https://doi.org/10.12739/NWSA.2019.14.4.2B0122
  • Akyıldız, S. (2020). Eğitim programı okuryazarlığı kavramının kavramsal yönden analizi: Bir ölçek geliştirme çalışması [A conceptual analysis of curriculum literacy concept: A study of scale development]. Electronic Journal of Social Sciences, 19(73), 315–332. https://doi.org/10.17755/esosder.554205
  • Bandalos, D.L., & Leite, W. (2013). Use of Monte Carlo studies in structural equation modeling research. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed.). Information Age.
  • Bazaldua, D.A.L., Lee, Y.-S., Keller, B., & Fellers, L. (2017). Assessing the performance of classical test theory item discrimination estimators in Monte Carlo simulations. Asia Pacific Education Review, 18, 585–598. https://doi.org/10.1007/s12564-017-9507-4
  • Brown, J.D. (1988). Tailored cloze: Improved with classical item analysis techniques. Language Testing, 5(1), 19–31. https://doi.org/10.1177/026553228800500102
  • Cho, S.-J., Li, F., & Bandalos, D.L. (2009). Accuracy of the parallel analysis procedure with polychoric correlations. Educational and Psychological Measurement, 69(5), 748–759. https://doi.org/10.1177/0013164409332229
  • Crocker, L., & Algina, J. (2008). Introduction to classical and modern test theory. Cengage Learning.
  • Cureton, E.E. (1957). The upper and lower twenty-seven per cent rule. Psychometrika, 22, 293–296. https://doi.org/10.1007/BF02289130
  • Curran, P.J., West, S.G., & Finch, J.F. (1996). The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological Methods, 1(1), 16–29. https://doi.org/10.1037/1082-989X.1.1.16
  • Çalışkan, A. (2020). Kriz yönetimi: Bir ölçek geliştirme çalışması [Crisis management: A scale development study]. Journal of Turkish Social Sciences Research, 5(2), 106–120.
  • Deakin, R. (1998). 3-D coordinate transformations. Surveying and Land Information Systems, 58(4), 223–234.
  • DeVellis, R.F. (2006). Classical test theory. Medical Care, 44(11), 50–59. https://doi.org/10.1097/01.mlr.0000245426.10853.30
  • Fan, X. (1998). Item Response Theory and Classical Test Theory: An empirical comparison of their item/person statistics. Educational and Psychological Measurement, 58(3), 357–381. https://doi.org/10.1177/0013164498058003001
  • Flora, D.B., & Curran, P.J. (2004). An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychological Methods, 9(4), 466–491. https://doi.org/10.1037/1082-989X.9.4.466
  • Foldnes, N., & Grønneberg, S. (2017). The asymptotic covariance matrix and its use in simulation studies. Structural Equation Modeling: A Multidisciplinary Journal, 24(6), 881–896. https://doi.org/10.1080/10705511.2017.1341320
  • Goretzko, D., Pham, T.T.H., & Bühner, M. (2021). Exploratory factor analysis: Current use, methodological developments and recommendations for good practice. Current Psychology, 40(7), 3510–3521. https://doi.org/10.1007/s12144-019-00300-2
  • Gorsuch, R.L. (1974). Factor analysis. W. B. Saunders.
  • Green, S.B., & Salkind, N.J. (2014). Using SPSS for Windows and Macintosh: Analyzing and understanding data (7th ed.). Pearson Education.
  • Kaplan, R.M., & Saccuzzo, D.P. (2018). Psychological testing: Principles, applications, and issues. Cengage Learning.
  • Kılıç, A.F., & Koyuncu, İ. (2017). Ölçek uyarlama çalışmalarının yapı geçerliği açısından incelenmesi [Examination of scale adaptation studies in terms of construct validity]. In Ö. Demirel & S. Dinçer (Eds.), Küreselleşen dünyada eğitim [Education in the globalized world] (pp. 1202–1205). Pegem.
  • Kline, P. (2000). The handbook of psychological testing (2nd ed.). Routledge.
  • Kline, T.J.B. (2005). Psychological testing: A practical approach to design and evaluation (3rd ed.). Sage.
  • Kohli, N., Koran, J., & Henn, L. (2015). Relationships among classical test theory and item response theory frameworks via factor analytic models. Educational and Psychological Measurement, 75(3), 389–405. https://doi.org/10.1177/0013164414559071
  • Koyuncu, İ., & Kılıç, A.F. (2019). The use of exploratory and confirmatory factor analyses: A document analysis. Education and Science, 44(198), 361–388. https://doi.org/10.15390/EB.2019.7665
  • Lange, M. (2009). A tale of two vectors. Dialectica, 63(4), 397–431. https://doi.org/10.1111/j.1746-8361.2009.01207.x
  • Li, C.-H. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods, 48(3), 936–949. https://doi.org/10.3758/s13428-015-0619-7
  • Liu, F. (2008). Comparison of several popular discrimination indices based on different criteria and their application in item analysis [Master of Arts, University of Georgia]. http://getd.libs.uga.edu/pdfs/liu_fu_200808_ma.pdf
  • Lozano, L.M., García-Cueto, E., & Muñiz, J. (2008). Effect of the number of response categories on the reliability and validity of rating scales. Methodology, 4(2), 73–79. https://doi.org/10.1027/1614-2241.4.2.73
  • Macdonald, P., & Paunonen, S.V. (2002). A Monte Carlo comparison of item and person statistics based on item response theory versus classical test theory. Educational and Psychological Measurement, 62(6), 921–943. https://doi.org/10.1177/0013164402238082
  • Metsämuuronen, J. (2020a). Generalized discrimination index. International Journal of Educational Methodology, 6(2), 237–257. https://doi.org/10.12973/ijem.6.2.237
  • Metsämuuronen, J. (2020b). Somers' D as an alternative for the item–test and item–rest correlation coefficients in the educational measurement settings. International Journal of Educational Methodology, 6(1), 207–221. https://doi.org/10.12973/ijem.6.1.207
  • Nunnally, J.C., & Bernstein, I.H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.
  • R Core Team. (2021). R: A language and environment for statistical computing [Computer software]. https://www.r-project.org/
  • Tarhan, M., & Yıldırım, A. (2021). Bir ölçek geliştirme çalışması: Hemşirelikte geçiş şoku ölçeği [A scale development study: Nursing Transition Shock Scale]. University of Health Sciences Journal of Nursing, 3(1), 7–14. https://doi.org/10.48071/sbuhemsirelik.818123

Details

Primary Language: English
Subjects: Field Education
Section: Articles
Authors

Abdullah Faruk Kılıç 0000-0003-3129-1763

İbrahim Uysal 0000-0002-6767-0362

Publication Date: September 30, 2022
Submission Date: April 5, 2022
Published in Issue: Year 2022

How to Cite

APA Kılıç, A. F., & Uysal, İ. (2022). To what extent are item discrimination values realistic? A new index for two-dimensional structures. International Journal of Assessment Tools in Education, 9(3), 728-740. https://doi.org/10.21449/ijate.1098757
