Research Article

The Effect of Item Pools of Different Strengths on the Test Results of Computerized-Adaptive Testing

Year 2021, Volume: 8 Issue: 1, 145 - 155, 15.03.2021
https://doi.org/10.21449/ijate.735155

Abstract

Item response theory provides important advantages for examinations administered, or to be administered, digitally. For computerized adaptive tests to produce valid and reliable estimates based on IRT, good-quality item pools must be used. This study examines how adaptive test administrations vary across item pools composed of items of different difficulty levels. Within the scope of the study, the effect of items whose b parameters varied while the a and c parameters were kept within a fixed range was examined. To this end, eight different item pools of 500 items with varying difficulty levels, together with ability scores for 2000 simulated examinees, were generated through simulation. As a result of the CAT simulations, RMSD, BIAS, and test lengths were examined. At the end of the study, it was found that tests administered from item pools whose b parameters spanned a range matching the ability levels terminated with fewer items and yielded more accurate estimations. When the b parameters covered a narrower range, estimating extreme ability values inconsistent with that range required more items. It was especially difficult to obtain accurate estimates for individuals with high ability levels when the item pool consisted of easy items, and for individuals with low ability levels when the item pool consisted of difficult items.
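The abstract evaluates CAT results through RMSD, BIAS, and test length under a model with a, b, and c item parameters (the three-parameter logistic model). The following is a minimal, illustrative sketch of how these pieces fit together; the parameter ranges, error term, and random seed below are assumptions for demonstration, not the study's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(42)

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability level theta."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return a ** 2 * (q / p) * ((p - c) / (1.0 - c)) ** 2

# Hypothetical 500-item pool; parameter ranges are illustrative only.
a = rng.uniform(0.8, 2.0, 500)       # discrimination
b = rng.uniform(-3.0, 3.0, 500)      # difficulty (the parameter varied across pools in the study)
c = rng.uniform(0.0, 0.25, 500)      # pseudo-guessing

# One adaptive step: choose the most informative item at the current ability estimate.
theta_hat = 0.0
next_item = int(np.argmax(item_information(theta_hat, a, b, c)))

# Evaluation criteria named in the abstract, computed over 2000 simulated examinees.
true_theta = rng.normal(0.0, 1.0, 2000)
est_theta = true_theta + rng.normal(0.0, 0.3, 2000)   # stand-in for CAT ability estimates
rmsd = np.sqrt(np.mean((est_theta - true_theta) ** 2))
bias = np.mean(est_theta - true_theta)
print(f"selected item {next_item}; RMSD = {rmsd:.3f}, BIAS = {bias:.3f}")
```

In a full CAT simulation the ability estimate would be updated after each response and items would be administered until a stopping rule is met; this sketch only shows the building blocks behind the reported criteria.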

References

  • Adams, R. (2005). Reliability as a measurement design effect. Studies in Educational Evaluation, 31, 162–172.
  • Baker, F. B., & Kim, S. H. (2004). Item response theory: Parameter estimation techniques. Marcel Dekker Inc.
  • Boyd, A. M., Dodd, B., & Fitzpatrick, S. (2013). A comparison of exposure control procedures in CAT systems based on different measurement models for testlets. Applied Measurement in Education, 26(2), 113-115.
  • Brown, J. M., & Weiss, D. J. (1977). An adaptive testing strategy for achievement test batteries (Research Rep. No. 77-6). University of Minnesota, Department of Psychology, Psychometric Methods Program.
  • Bulut, O., & Kan, A. (2012). Application of computerized adaptive testing to entrance examination for graduate studies in Turkey. Egitim Arastirmalari-Eurasian Journal of Educational Research, 49, 61–80.
  • Chang, H. H. (2014). Psychometrics behind computerized adaptive testing. Psychometrika, 1-20.
  • Cömert, M. (2008). Bireye uyarlanmış bilgisayar destekli ölçme ve değerlendirme yazılımı geliştirilmesi [Computer-aided assessment and evaluation analysis adapted to the individual] [Unpublished master’s thesis]. Bahçeşehir University.
  • Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Harcourt Brace Jovanovich College Publishers.
  • Dodd, B. G., Koch, W. R., & Ayala, R. J. (1993). Computerized adaptive testing using the partial credit model: effects of item pool characteristics and different stopping rules. Educational and Psychological Measurement, 53(1), 61-77.
  • Eggen, T. J. H. M., & Verschoor, A. J. (2006). Optimal testing with easy or difficult items in computerized adaptive testing. Applied Psychological Measurement, 30(5), 379-393.
  • Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Lawrence Erlbaum Associates.
  • Georgiadou, E., Triantafillou, E., & Economides, A. A. (2006). Evaluation parameters for computer adaptive testing. British Journal of Educational Technology, 37(2), 261–278.
  • Glas, C. A. W., & van der Linden, W. J. (2003). Computerized adaptive testing with item cloning. Applied Psychological Measurement, 27(4), 247–261.
  • Hambleton, R. K., & Swaminathan, H. (1989). Item response theory: Principles and applications. Kluwer Nijhoff Publishing.
  • Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Sage Publications Inc.
  • Iseri, A. I. (2002). Assessment of students' mathematics achievement through computer adaptive testing procedures [Unpublished doctoral dissertation]. Middle East Technical University.
  • Kalender, İ. (2011). Effects of different computerized adaptive testing strategies on recovery of ability [Unpublished doctoral dissertation]. Middle East Technical University.
  • Kaptan, F. (1993). Yetenek kestiriminde adaptive (bireyselleştirilmiş) test uygulaması ile geleneksel kâğıt-kalem testi uygulamasının karşılaştırılması [Comparison of adaptive (individualized) test application and traditional paper-pencil test application in ability estimation] [Unpublished doctoral dissertation]. Hacettepe University.
  • Karasar, N. (2016). Bilimsel araştırma yöntemi [Scientific Research Method]. Nobel Yayın Dağıtım.
  • Kezer, F. (2013). Comparison of the computerized adaptive testing strategies [Unpublished doctoral dissertation]. Ankara University.
  • Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Addison - Wesley.
  • Magnusson, D. (1966). Test theory. Addison-Wesley Publishing Company.
  • Maurelli, V. A., & Weiss, D. J. (1981). Factors influencing the psychometric characteristics of an adaptive testing strategy for test batteries (Research Rep. No. 81-4). University of Minnesota, Department of Psychology, Computerized Adaptive Testing Laboratory.
  • McBride, J. R., & Martin, J. T. (1983). Reliability and validity of adaptive ability tests in a military design. In Weiss, D.J. (Ed.). New horizons in testing: Latent trait test theory and computerized adaptive testing. Academic Press.
  • McDonald, P. L. (2002). Computer adaptive test for measuring personality factors using item response theory [Unpublished doctoral dissertation]. The University of Western Ontario.
  • Olsen, J. B., Maynes, D. D., Slawson, D., & Ho, K. (1989). Comparison of paper-administered, computer-administered and computerized adaptive achievement tests. Journal of Educational Computing Research, 5(3), 311-326.
  • Öztuna, D. (2008). An application of computerized adaptive testing in the evaluation of disability in musculoskeletal disorders [Unpublished doctoral dissertation]. Ankara University, Institute of Health Sciences.
  • Reid, C. A., Kolakowsky-Hayner, S. A., Lewis, A. N., & Armstrong, A. J. (2007). Modern psychometric methodology: Applications of item response theory. Rehabilitation Counseling Bulletin, 50(3), 177-178.
  • Scullard, M. G. (2007). Application of item response theory based computerized adaptive testing to the Strong Interest Inventory [Unpublished doctoral dissertation]. University of Minnesota.
  • Smits, N., Cuijpers, P., & van Straten, A. (2011). Applying computerized adaptive testing to the CES-D Scale: A simulation study. Psychiatry Research, 188, 145–155.
  • Sukamolson, S. (2002). Computerized test/item banking and computerized adaptive testing for teachers and lecturers. http://www.stc.arts.chula.ac.th/ITUA/Papers_for_ITUA_Proceedings/Suphat2.pdf
  • Thompson, J. G., & Weiss, D. J. (1980). Criterion-related validity of adaptive testing strategies (Research Rep. No. 80-3). University of Minnesota, Department of Psychology, Computerized Adaptive Testing Laboratory.
  • Vale, C. D., & Weiss, D. J. (1975). A study of computer-administered stradaptive ability testing (Research Rep. No. 75-4). University of Minnesota, Department of Psychology, Psychometric Methods Program.
  • Veldkamp, B. P., & van der Linden, W. J. (2010). Designing item pools for adaptive testing. In van der Linden, W. J., & Glas, C. A. W. (Eds.), Elements of adaptive testing. Springer.
  • Wainer, H., Dorans, N. J., Flaugher, R., Green, B. F., Mislevy, R. J., Steinberg, L., & Thissen, D. (2000). Computerized adaptive testing: A primer. Lawrence Erlbaum Associates, Publishers.
  • Weiss, D. J. (1985). Adaptive testing by computer. Journal of Consulting and Clinical Psychology, 53(6), 774-789.
  • Weiss, D. J. (2011). Better data from better measurements using computerized adaptive testing. Journal of Methods and Measurement in the Social Sciences, 2(1), 1–27.
  • Weiss, D. J., & Kingsbury, G. G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21(4), 361-375.


Details

Primary Language English
Subjects Studies on Education
Journal Section Articles
Authors

Fatih Kezer 0000-0001-9640-3004

Publication Date March 15, 2021
Submission Date May 10, 2020
Published in Issue Year 2021 Volume: 8 Issue: 1

Cite

APA Kezer, F. (2021). The Effect of Item Pools of Different Strengths on the Test Results of Computerized-Adaptive Testing. International Journal of Assessment Tools in Education, 8(1), 145-155. https://doi.org/10.21449/ijate.735155
