Test development is a complex process that requires attention to many factors, one of which is writing items of varying difficulty. Using items that span a range of difficulty is important to ensure that test results accurately reflect test-takers' abilities. Therefore, the factors affecting item difficulty should be identified, and item difficulties should be estimated before testing. This study aims to investigate the factors that affect estimated and perceived item difficulty in the High School Entrance Examination (LGS) in Türkiye and to improve estimation accuracy by giving experts feedback. The study first estimated the difficulty of 40 reading comprehension, grammar, and reasoning items from response data. The experts' predictions were then compared with the estimated item difficulties, and feedback was provided to improve the accuracy of their predictions. The study found that some item features (e.g., length and readability) did not affect the estimated difficulty but did affect the experts' perceptions of item difficulty. Based on these results, the study concludes that providing feedback to experts can refine the factors underlying their item difficulty estimates, and can thereby improve both the quality of future tests and the experts' ability to estimate item difficulty accurately.
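The data-based estimation and comparison steps described above follow standard classical test theory practice. The sketch below is a minimal illustration rather than the study's actual analysis: the response matrix, the expert prediction values, and the agreement measures (Pearson correlation and mean absolute deviation) are all hypothetical placeholders on a 0-1 difficulty (proportion-correct) scale.

```python
import numpy as np
from scipy import stats

# Hypothetical 0/1 response matrix: rows = test takers, columns = the 40 items.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(500, 40))

# Classical item difficulty (p-value): proportion of correct responses per item.
# Higher p means an easier item.
estimated_difficulty = responses.mean(axis=0)

# Hypothetical expert predictions on the same 0-1 scale, one value per item.
expert_predictions = rng.uniform(0.2, 0.9, size=40)

# Agreement between data-based estimates and expert predictions,
# e.g., Pearson correlation and mean absolute deviation.
r, p = stats.pearsonr(estimated_difficulty, expert_predictions)
mad = np.abs(estimated_difficulty - expert_predictions).mean()
print(f"correlation = {r:.2f} (p = {p:.3f}), mean absolute deviation = {mad:.2f}")
```

In a feedback cycle like the one described in the abstract, such agreement summaries could be reported back to the experts before a new round of predictions.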
Keywords: Item difficulty, Expert prediction, Feedback, Language test, LGS
Gazi University Ethics Committee, E77082166-604.01.02-711551, dated 02.08.2023.
This research was presented as an oral presentation at the NCME 2023 congress, April 12-15, 2023, Chicago, IL, USA.
| Primary Language | English |
| --- | --- |
| Subjects | Measurement Theories and Applications in Education and Psychology |
| Section | Articles |
| Authors | |
| Early View Date | May 22, 2024 |
| Publication Date | June 20, 2024 |
| Submission Date | October 15, 2023 |
| Acceptance Date | May 2, 2024 |
| Published in Issue | Year 2024 |