Research Article

The Use of Three-Option Multiple Choice Items for Classroom Assessment

Year 2018, Volume: 5, Issue: 2, 314-324, 19.05.2018
https://doi.org/10.21449/ijate.421167

Abstract

Although multiple-choice items (MCIs) are widely used for classroom assessment, designing MCIs with a sufficient number of plausible distractors is challenging for teachers. Previous empirical studies reveal that three-option MCIs offer several advantages over four-option MCIs, including reduced preparation and administration time. This study examines how two elimination methods, the least-selected method and the random method, influence item difficulty, item discrimination, and test reliability when the number of options in MCIs is reduced from four to three. The findings reveal that neither method negatively affected item difficulty, item discrimination, or test reliability. Results are discussed in relation to promoting quality classroom assessment.
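
To make the two elimination methods concrete, the sketch below illustrates, under classical test theory, how each method would choose a distractor to drop from a four-option item, and how the three statistics named above are typically computed: difficulty as the proportion correct, discrimination as the corrected item-total correlation, and reliability as Cronbach's alpha. This is a minimal illustration of the general technique, not the study's actual analysis; the simulated response matrix, the answer key, and all function names are hypothetical.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons-by-items matrix of 0/1 scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_stats(scores):
    """Classical item difficulty (proportion correct) and discrimination
    (corrected item-total correlation) for each item."""
    p = scores.mean(axis=0)
    total = scores.sum(axis=1)
    disc = np.array([np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
                     for j in range(scores.shape[1])])
    return p, disc

def distractor_to_drop(responses, key, method, rng):
    """For each four-option item (options coded 0-3), pick one distractor
    to eliminate: the least-selected one, or a randomly chosen one."""
    dropped = []
    for j in range(responses.shape[1]):
        distractors = [o for o in range(4) if o != key[j]]
        if method == "least_selected":
            counts = [(responses[:, j] == o).sum() for o in distractors]
            dropped.append(distractors[int(np.argmin(counts))])
        else:  # "random" elimination
            dropped.append(int(rng.choice(distractors)))
    return dropped

# Toy data (hypothetical): 200 examinees, 20 items, options coded 0-3.
rng = np.random.default_rng(42)
key = rng.integers(0, 4, size=20)                # answer key
responses = rng.integers(0, 4, size=(200, 20))   # chosen options
scores = (responses == key).astype(int)          # 0/1 scoring

p, disc = item_stats(scores)
alpha = cronbach_alpha(scores)
drop_least = distractor_to_drop(responses, key, "least_selected", rng)
drop_random = distractor_to_drop(responses, key, "random", rng)
print(f"mean difficulty = {p.mean():.2f}, "
      f"mean discrimination = {disc.mean():.2f}, alpha = {alpha:.2f}")
```

A comparison like the one reported in the study would, of course, rest on actual examinee responses to the four-option and resulting three-option forms rather than on simulated data.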

References

  • Aamodt, M. G., & McShane, T. D. (1992). A meta-analytic investigation of the effect of various test item characteristics on test scores and test completion times. Public Personnel Management, 21(2), 151–160.
  • Abad, F., Olea, J., & Ponsoda, V. (2001). Analysis of the optimum number of alternatives from the Item Response Theory. Psicothema, 13(1), 152-158.
  • AERA, APA, & NCME (2014). Standards for educational and psychological testing. Washington, DC: American Psychological Association, American Educational Research Association, National Council on Measurement in Education.
  • Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco: Jossey-Bass.
  • Atalmis, E. H., & Kingston, N. M. (2017). Three, four, and none of the above options in multiple-choice items. Turkish Journal of Education, 6(4), 143-157.
  • Baghaei, P., & Amrahi, N. (2011). The effects of the number of options on the psychometric characteristics of multiple choice items. Psychological Test and Assessment Modeling, 53(2), 192-211.
  • Balta, N., & Eryılmaz, A. (2017). Counterintuitive dynamics test. International Journal of Science and Mathematics Education, 15(3), 411-431.
  • Chappuis, S., & Stiggins, R. J. (2002). Classroom assessment for learning. Educational Leadership, 60(1), 40-44.
  • Cizek, G. J., & O'Day, D. M. (1994). Further investigation of nonfunctioning options in multiple-choice test items. Educational and Psychological Measurement, 54(4), 861-872.
  • Collins, J. (2006). Education techniques for lifelong learning: Writing multiple-choice questions for continuing medical education activities and self-assessment modules. RadioGraphics, 26(2), 543-551.
  • Crehan, K. D., Haladyna, T. M., & Brewer, B. W. (1993). Use of an inclusive option and the optimal number of options for multiple-choice items. Educational and Psychological Measurement, 53(1), 241-247.
  • Darling-Hammond, L., & Youngs, P. (2002). Defining “highly qualified teachers”: What does “scientifically-based research” actually tell us? Educational Researcher, 31(9), 13-25.
  • De Ayala, R. J. (2013). The theory and practice of item response theory. Guilford Publications.
  • Dehnad, A., Nasser, H., & Hosseini, A. F. (2014). A comparison between three- and four-option multiple choice questions. Procedia - Social and Behavioral Sciences, 98, 398-403.
  • Delgado, A. R., & Prieto, G. (1998). Further evidence favoring three-option items in multiple-choice tests. European Journal of Psychological Assessment, 14(3), 197-201.
  • Downing, S. M. (2005). The effects of violating standard item writing principles on tests and students: the consequences of using flawed test items on achievement examinations in medical education. Advances in Health Sciences Education, 10(2), 133-143.
  • Duhachek, A., & Iacobucci, D. (2004). Alpha's standard error (ASE): An accurate and precise confidence interval estimate. Journal of Applied Psychology, 89(5), 792-808.
  • Field, A. (2009). Discovering statistics using SPSS. London, England: Sage.
  • Frey, B. B., Petersen, S., Edwards, L. M., Pedrotti, J. T., & Peyton, V. (2005). Item-writing rules: Collective wisdom. Teaching and Teacher Education, 21(4), 357-364.
  • Frey, B. B., & Schmitt, V. L. (2010). Teachers’ classroom assessment practices. Middle Grades Research Journal, 5(3), 107-117.
  • Haladyna, T. M., & Downing, S. M. (1989). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.
  • Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.
  • Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and validating test items. Routledge.
  • Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Beverly Hills, CA: Sage.
  • Landrum, R. E., Cashin, J. R., & Theis, K. S. (1993). More evidence in favor of three-option multiple-choice tests. Educational and Psychological Measurement, 53(3), 771–778.
  • Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment minute by minute, day by day. Educational Leadership, 63(3), 18-24.
  • Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York: American Council on Education and Macmillan.
  • Moreno, R., Martínez, R. J., & Muñiz, J. (2006). New guidelines for developing multiple-choice items. Methodology, 2(2), 65-72.
  • Moreno, R., Martínez, R. J., & Muñiz, J. (2015). Guidelines based on validity criteria for the development of multiple choice items. Psicothema, 27(4), 388-394.
  • Rich, C. E., & Johanson, G. A. (1990, April). An item-level analysis of “none of the above.” Paper presented at the annual meeting of the American Educational Research Association, Boston.
  • Rodriguez, M. C. (1997). The art & science of item writing: A meta-analysis of multiple choice item format effects. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
  • Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2), 3-13.
  • Rodriguez, M. C., Kettler, R. J., & Elliott, S. N. (2014). Distractor functioning in modified items for test accessibility. Sage Open, 4(4), 1-10.
  • Rogers, W.T., & Harley, D. (1999). An empirical comparison of three- and four-choice items and tests: susceptibility to test wiseness and internal consistency reliability. Educational and Psychological Measurement, 59(2), 234-247.
  • Shizuka, T., Takeuchi, O., Yashima, T., & Yoshizawa, K. (2006). A comparison of three- and four-option English tests for university entrance selection purposes in Japan. Language Testing, 23(1), 35-57.
  • Sidick, J. T., Barrett, G. V., & Doverspike, D. (1994). Three-alternative multiple choice tests: An attractive option. Personnel Psychology, 47(4), 829-835.
  • Stiggins, R. J. (1991). Relevant classroom assessment training for teachers. Educational Measurement: Issues and Practice, 10(1), 7–12.
  • Tarrant, M., Knierim, A., Hayes, S. K., & Ware, J. (2006). The frequency of item writing flaws in multiple-choice questions used in high stakes nursing assessments. Nurse Education in Practice, 6(6), 354-363.
  • Tarrant, M., & Ware, J. (2010). A comparison of the psychometric properties of three- and four-option multiple-choice questions in nursing assessments. Nurse Education Today, 30(6), 539-543.
  • Thorndike, R. M. (2005). Measurement and evaluation in psychology and education (7th ed.). Upper Saddle River, NJ: Pearson Education.
  • Trevisan, M. S., Sax, G., & Michael, W. B. (1991). The effects of the number of options per item and student ability on test validity and reliability. Educational and Psychological Measurement, 51(4), 829-837.
  • Van Zyl, J. M., Neudecker, H., & Nel, D. G. (2000). On the distribution of the maximum likelihood estimator of Cronbach’s alpha. Psychometrika, 65(1), 271–280.


Details

Primary Language: English
Subjects: Studies on Education
Section: Articles
Authors

Erkan Hasan Atalmış

Publication Date: May 19, 2018
Submission Date: January 1, 2018
Published in Issue: Year 2018, Volume: 5, Issue: 2

How to Cite

APA: Atalmış, E. H. (2018). The Use of Three-Option Multiple Choice Items for Classroom Assessment. International Journal of Assessment Tools in Education, 5(2), 314-324. https://doi.org/10.21449/ijate.421167
