Research Article

Data Fit Comparison of Mixture Item Response Theory Models and Traditional Models

Year 2018, Volume 5, Issue 2, 301-313, 19.05.2018
https://doi.org/10.21449/ijate.402806

Abstract

The purpose of this study is to determine the best-fitting IRT model [Rasch, 2PL, 3PL, 4PL, and mixture IRT (2PL and 3PL)] for the science and technology subtest of the Transition from Basic Education to Secondary Education (TEOG) exam, which is administered at the national level, and to estimate the item parameters under the best-fitting model. This is basic research, as it contributes to the production of knowledge that is fundamental to test development theories. The study group consists of 5000 students randomly selected from those who took the TEOG exam in 2015. The analyses were carried out on the 17 multiple-choice items of the TEOG science and technology subtest. When the model fit indices were evaluated, the two-parameter MixIRT model with three latent classes fit the data best. Under this model, considering the average item difficulty and discrimination, the items are moderately difficult and moderately discriminating for students in latent class 1; considerably easy and only slightly discriminating for students in latent class 2; and difficult and only slightly discriminating for students in latent class 3.
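
For readers unfamiliar with the model named above, the following is a minimal sketch of the two-parameter mixture IRT (MixIRT) model in its standard form; the abstract does not state the study's exact parameterization, so the notation is an assumption based on the usual formulation (e.g., Rost, 1990; De Ayala & Santiago, 2017):

$$
P(X_{ij}=1 \mid \theta_j, g) \;=\; \frac{\exp\!\big[a_{ig}(\theta_j - b_{ig})\big]}{1 + \exp\!\big[a_{ig}(\theta_j - b_{ig})\big]},
\qquad
P(\mathbf{x}_j) \;=\; \sum_{g=1}^{G} \pi_g \, P(\mathbf{x}_j \mid g),
$$

where $g = 1, \dots, G$ indexes the latent classes ($G = 3$ in the best-fitting model reported here), $\pi_g$ is the mixing proportion of class $g$, $\theta_j$ is the ability of examinee $j$, $a_{ig}$ and $b_{ig}$ are the discrimination and difficulty of item $i$ within class $g$, and $P(\mathbf{x}_j \mid g)$ is the within-class likelihood of examinee $j$'s response vector. Fit indices used to compare competing IRT models are commonly information criteria of the form $\mathrm{AIC} = -2\log L + 2p$ and $\mathrm{BIC} = -2\log L + p\log N$, with $p$ estimated parameters and $N$ examinees; the abstract does not specify which indices the study used, so these are illustrative only.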

References

  • Barton, M. A., & Lord, F. M. (1981). An upper asymptote for the three-parameter logistic item- response model. Research Bulletin, 81-20.
  • Bolt, D. M., Cohen, A. S., & Wollack, J. A. (2001). A mixture item response model for multiple-choice data. Journal of Educational and Behavioral Statistics, 26, 381–409.
  • Can, S. (2003). The analyses of secondary education institutions student selection and placement test’s verbal section with respect to item response theory models. Unpublished master's thesis, Orta Doğu Teknik Üniversitesi, Sosyal Bilimler Enstitüsü, Ankara.
  • Cohen, A. S., & Bolt, D. M. (2005). A mixture model analysis of differential item functioning. Journal of Educational Measurement, 42(2), 133–148. doi: 10.1111/j.1745-3984.2005.00007.
  • De Ayala, R. J. (2009). The theory and practice of item response theory. New York, NY: Guilford Press.
  • De Ayala, R. J. & Santiago, S. Y. (2017). An introduction to mixture item response theory models. Journal of School Psychology, 60, 25-40. doi: 10.1016/j.jsp.2016.01.002
  • DeMars, C. (2010). Item response theory. New York: Oxford University Press.
  • Egberink, I. J., Meijer, R. R. & Veldkamp, B. P. (2010). Conscientiousness in the workplace: Applying mixture IRT to investigate scalability and predictive validity. Journal of Research in Personality, 44, 232–244.
  • Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Erdemir, A. (2015). Bir, iki, üç ve dört parametreli lojistik madde tepki kuramı modellerinin karşılaştırılması [Comparison of one-, two-, three- and four-parameter logistic item response theory models]. Unpublished master's thesis, Gazi Üniversitesi, Eğitim Bilimleri Enstitüsü, Ankara.
  • Finch, W. H. & French, B. F. (2012). Parameter estimation with mixture item response theory models: A monte carlo comparison of maximum likelihood and bayesian methods. Journal of Modern Applied Statistical Methods, 11(1), 167-178. doi: 10.22237/jmasm/1335845580
  • Hambleton, R. K. & Swaminathan, H. (1985). Item response theory: Principles and application. Boston: Kluwer Academic Publishers Group.
  • Hambleton, R. K., Swaminathan, H. & Rogers, H. J. (1991). Fundamentals of item response theory. California: Sage Publications Inc.
  • Kelderman, H., & Macready, G. B. (1990). The use of loglinear models for assessing differential item functioning across manifest and latent examinee groups. Journal of Educational Measurement, 27(4), 307–327.
  • Kılıç, İ. (1999). The fit of one-, two- and three-parameter models of item response theory to the student selection test of the student selection and placement center. Unpublished doctoral dissertation, Orta Doğu Teknik Üniversitesi, Sosyal Bilimler Enstitüsü, Ankara.
  • Li, F., Cohen, A. S., Kim, S., & Cho, S. (2009). Model selection methods for mixture dichotomous IRT models. Applied Psychological Measurement, 33(5), 353–373. doi: 10.1177/0146621608326422.
  • Loken, E., & Rulison, K. L. (2010). Estimation of a four-parameter item response theory model. The British Journal of Mathematical and Statistical Psychology, 63(3), 509–25. doi:10.1348/000711009X474502
  • Maij-de Meij, A. M., Kelderman, H., & Van der Flier, H. (2008). Fitting a mixture item response theory model to personality questionnaire data: Characterizing latent classes and investigating possibilities for improving prediction. Applied Psychological Measurement, 32, 611–631.
  • Maij-de Meij, A. M., Kelderman, H. & van der Flier, H. (2010). Improvement in detection of differential item functioning using a mixture item response theory model. Multivariate Behavioral Research, 45(6), 975-999. doi:10.1080/00273171.2010.533047
  • Muthen, B. & Asparouhov, T. (2006). Item response mixture modeling: Application to tobacco dependence criteria. Addictive Behaviors, 31, 1050–1066.
  • Partchev, I. (2017). Simple interface to the estimation and plotting of IRT models. R package 'irtoys' manual. Retrieved February 5, 2018, from https://cran.r-project.org/web/packages/irtoys/irtoys.pdf
  • Robitzsch, A. (2018). Supplementary item response theory models. R package 'sirt' manual. Retrieved February 5, 2018, from https://cran.r-project.org/web/packages/sirt/sirt.pdf
  • Rost, J. (1990). Rasch models in latent classes: An integration of two approaches to item analysis. Applied Psychological Measurement, 14, 271–282.
  • Skrondal, A. & Rabe-Hesketh, S. (2007). Latent variable modelling: A survey. Scandinavian Journal of Statistics, 34(4), 712–745. doi: 10.1111/j.1467-9469.2007.00573.x
  • Vermunt, J. K., & Magidson, J. (2002). Latent class cluster analysis. In J. A. Hagenaars, & A. L. McCutcheon, Applied latent class analysis (p. 89-107). New York: Cambridge University Press.
  • Vermunt, J. K., & Magidson, J. (2004). Local independence. In A. B. M. S. Lewis Beck (Ed.), Encyclopedia of social sciences research methods (pp. 732-733). Thousand Oaks: Sage Publications.
  • von Davier, M. & Rost, J. (2017). Logistic mixture-distribution response models. In W. J. van der Linden (Ed.), Handbook of item response theory, volume one: Models (p. 393-406). Boca Raton: Chapman and Hall/CRC.
  • Waller, N. G., & Reise, S. P. (2010). Measuring psychopathology with non-standard item response theory models: Fitting the four-parameter model to the Minnesota Multiphasic Personality Inventory. In S. Embretson (Ed.), New directions in psychological measurement with model-based approaches (pp. 147-173). Washington, DC: American Psychological Association.


Details

Primary Language: English
Subjects: Studies on Education
Section: Articles
Authors

Seher Yalçın (ORCID: 0000-0003-0177-6727)

Publication Date: May 19, 2018
Submission Date: February 10, 2018
Published in Issue: Year 2018, Volume 5, Issue 2

Cite

APA Yalçın, S. (2018). Data Fit Comparison of Mixture Item Response Theory Models and Traditional Models. International Journal of Assessment Tools in Education, 5(2), 301-313. https://doi.org/10.21449/ijate.402806
