Research Article


An analysis of scoring via analytic rubric and general impression in peer assessment

Year 2019, Volume: 8 Issue: 4, 258 - 275, 31.10.2019
https://doi.org/10.19128/turje.609073

Abstract

The aim of this research was to compare scoring conducted with an analytic rubric and scoring based on general impression in peer assessment. A total of 66 university students participated in the study, six of whom served as peer raters on a voluntary basis. The students were asked to prepare a sample study within the scope of a scientific research methods course and to present it in class. During each presentation, the course instructor and the peer raters scored the work, first with the analytic rubric and then by general impression. The collected data were analyzed using the Rasch model. Both scoring methods distinguished the students from one another with high reliability; however, differences between students' ability levels were revealed more precisely when the analytic rubric was used. Regardless of the scoring method, high positive correlations were found between the ability estimates based on the peers' scores and those based on the instructor's scores. Finally, the ability estimates corresponding to the peers' analytic rubric scoring and to their general impression scoring showed a strong positive relationship.
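
Although the abstract refers simply to "the Rasch model," the works cited below (e.g., Linacre, 2018; Myford & Wolfe, 2004) point to a many-facet Rasch analysis, in which examinees, raters, and rubric criteria enter the model as separate facets. A standard three-facet formulation is given here for illustration only; the exact facet structure used in the article is an assumption, not stated in the abstract:

    \log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \alpha_j - \tau_k

where P_{nijk} is the probability that examinee n receives category k rather than category k−1 on criterion i from rater j; \theta_n is the ability of examinee n, \delta_i the difficulty of criterion i, \alpha_j the severity of rater j, and \tau_k the threshold of score category k. Because rater severity \alpha_j is estimated separately from examinee ability \theta_n, peer and instructor estimates of \theta_n can be placed on a common logit scale, which is what makes the correlations reported in the abstract interpretable.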

References

  • Alharby, E. R. (2006). A comparison between two scoring methods, holistic vs. analytic using two measurement models, generalizability theory and the many facet Rasch measurement within the context of performance assessment (Unpublished doctoral dissertation). Pennsylvania State University, Pennsylvania.
  • Alzaid, J. M. (2017). The effect of peer assessment on the evaluation process of students. International Education Studies, 10(6), 159–173. DOI: 10.5539/ies.v10n6p159
  • Amo, E., & Jareno, F. (2011). Self, peer and teacher assessment as active learning methods. Research Journal of International Studies, 18, 41–47.
  • Anita, H. (2011). The connection between analytic and holistic approaches to scoring in the writing component of the PROFEX EMP exam. Acta Medica Marisiensis, 57(3), 206–208.
  • Ashraf, H., & Mahdinezhad, M. (2015). The role of peer-assessment versus self-assessment in promoting autonomy in language use: A case of EFL learners. Iranian Journal of Language Testing, 5(2), 110–120.
  • Berry, R. (2008). Assessment for learning. Hong Kong: Hong Kong University Press.
  • Bostock, S. (2000). Student peer assessment. Retrieved from https://www.reading.ac.uk/web/files/engageinassessment/student_peer_assessment_-_stephen_bostock.pdf
  • Boud, D. (2013). Introduction: Making the move to peer learning. In D. Boud, R. Cohen, & J. Sampson (Eds.), Peer learning in higher education: Learning from & with each other (pp. 1–18). New York: Routledge.
  • Chi, E. (2001). Comparing holistic and analytic scoring for performance assessment with many-facet Rasch model. Journal of Applied Measurement, 2(4), 379–388.
  • Çetin, B., & Kelecioğlu, H. (2004). The relation between scores predicted from structured features of essay and scores based on scoring key and overall impression in essay type examinations. Hacettepe University Journal of Education, 26, 19–26.
  • Falchikov, N. (2001). Learning together: Peer tutoring in higher education. London: Routledge Falmer.
  • Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322. DOI: 10.3102/00346543070003287
  • Frankland, S. (2007). Peer assessments among students in a problem based learning format. In S. Frankland (Ed.), Enhancing teaching and learning through assessment: Developing an appropriate model (pp. 144–155). Dordrecht: Springer.
  • Ghalib, T. K., & Al-Hattami, A. A. (2015). Holistic versus analytic evaluation of EFL writing: A case study. English Language Teaching, 8(7), 225–236. DOI: 10.5539/elt.v8n7p225
  • Gravells, A. (2014). The award in education and training. London: Learning Matters.
  • Gronlund, N. E. (1998). Assessment of student achievement. Boston: Allyn & Bacon.
  • Güler, N., İlhan, M., Güneyli, A., & Demir, S. (2017). An evaluation of the psychometric properties of three different forms of Daly and Miller’s writing apprehension test through Rasch analysis. Educational Sciences: Theory & Practice, 17(3), 721–744. DOI: 10.12738/estp.2017.3.0051
  • Hall, E. K., & Salmon, S. J. (2003). Chocolate chip cookies and rubrics helping students understand rubrics in inclusive settings. Teaching Exceptional Children, 35(4), 8–11. DOI: 10.1177/004005990303500401
  • Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Newbury Park, CA: SAGE Publications, Inc.
  • Hanrahan, S. J., & Isaacs, G. (2001). Assessing self- and peer-assessment: The students' views. Higher Education Research & Development, 20(1), 53–70. DOI: 10.1080/07294360123776
  • Hinkle, D. E., Wiersma, W., & Jurs, S. G. (1979). Applied statistics for the behavioral sciences. Chicago: Rand McNally College.
  • Hunter, D. M., Jones, R. M., & Randhawa, B. S. (1996). The use of holistic versus analytic scoring for large-scale assessment of writing. The Canadian Journal of Program Evaluation, 11(2), 61–85.
  • Jönsson, A., & Balan, A. (2018). Analytic or holistic: A study of agreement between different grading models. Practical Assessment, Research & Evaluation, 23(12), 1–11.
  • Kan, A. (2007). An alternative method in the new educational program from the point of performance-based assessment: Rubric scoring scales. Educational Sciences: Theory & Practice, 7(1), 144–152.
  • Karaca, E. (2009). An evaluation of teacher trainees’ opinions of the peer assessment in terms of some variables. World Applied Sciences Journal, 6(1), 123–128.
  • Knoch, U. (2009). Diagnostic assessment of writing: A comparison of two rating scales. Language Testing, 26(2), 275–304. DOI: 10.1177/0265532208101008
  • Lee, M., Peterson, J. J., & Dixon, A. (2010). Rasch calibration of physical activity self-efficacy and social support scale for persons with intellectual disabilities. Research in Developmental Disabilities, 31(4), 903−913. DOI: 10.1016/j.ridd.2010.02.010
  • Lester, F. K., Lambdin, D. V., & Preston, R. V. (1997). A new vision of the nature and purposes of assessment in the mathematics classroom. In G. D. Phye (Ed.), Handbook of classroom assessment: Learning, adjustment, and achievement (pp. 287−320). San Diego, CA: Academic Press.
  • Linacre, J. M. (2002). Optimizing rating scale category effectiveness. Journal of Applied Measurement, 3(1), 85−106.
  • Linacre, J. M. (2018). A user's guide to FACETS Rasch-model computer programs. Retrieved from https://www.winsteps.com/a/Facets-Manual.pdf
  • Liu, N. F., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in Higher Education, 11(3), 279−290. DOI: 10.1080/13562510600680582
  • Macpherson, K. (1999). The development of critical thinking skills in undergraduate supervisory management units: Efficacy of student peer assessment. Assessment & Evaluation in Higher Education, 24(3), 273−284. DOI: 10.1080/0260293990240302
  • Mann, B. L. (2006). Testing the validity of the post and vote model of web-based peer assessment. In D. D. Williams, S. L. Howell, & M. Hricko (Eds.), Online assessment, measurement, and evaluation: Emerging practices (pp. 131−152). Hershey, PA: Information Science.
  • Matsuno, S. (2009). Self-, peer-, and teacher-assessments in Japanese university EFL writing classrooms. Language Testing, 26(1), 75−100. DOI: 10.1177/0265532208097337
  • McDonald, B. (2016). Peer assessment that works: A guide for teachers. London: Rowman & Littlefield.
  • McLeod, S. G., Brown, G. C., McDaniels, W., & Sledge, L. (2009). Improving writing with a PAL: Harnessing the power of peer assisted learning with the reader’s assessment rubrics. International Journal of Teaching and Learning in Higher Education, 20(3), 488−502.
  • McNamara, T. F. (1996). Measuring second language performance. London and New York: Longman.
  • Mehrdad, N., Bigdeli, S., & Ebrahimi, H. (2012). A comparative study on self, peer and teacher evaluation to evaluate clinical skills of nursing students. Procedia - Social and Behavioral Sciences, 47, 1847−1852.
  • Moskal, B. M. (2000). Scoring rubrics: What, when and how? Practical Assessment, Research & Evaluation, 7(3), 1–5. Retrieved from https://pareonline.net/getvn.asp?v=7&n=3
  • Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10), 1-6. Retrieved from http://pareonline.net/getvn.asp?v=7&n=10
  • Mutwarasibo, F. (2016). University students’ attitudes towards peer assessment and reactions to peer feedback on group writing. Rwanda Journal, Series A: Arts and Humanities, 1(1), 32−48. DOI: 10.4314/rj.v1i1.4A
  • Myford, C. M., & Wolfe, E. W. (2004). Detecting and measuring rater effects using many-facet Rasch measurement: Part II. Journal of Applied Measurement, 5(2), 189−227.
  • Napoles, J. (2008). Relationships among instructor, peer, and self-evaluations of undergraduate music education majors' micro-teaching experiences. Journal of Research in Music Education, 56(1), 82−91. DOI: 10.1177/0022429408323071
  • Nitko, A. J. (2004). Educational assessment of students. Upper Saddle River, NJ: Pearson.
  • Ounis, M. (2017). A comparison between holistic and analytic assessment of speaking. Journal of Language Teaching and Research, 8(4), 679−690. DOI: 10.17507/jltr.0804.06
  • Petkov, D., & Petkova, O. (2006). Development of scoring rubrics for IS projects as an assessment tool. Issues in Informing Science and Information Technology, 3, 499−510. DOI: 10.28945/910
  • Prins, F. J., Sluijsmans, D. M. A., Kirschner, P. A., & Strijbos, J. W. (2005). Formative peer assessment in a CSCL environment: A case study. Assessment & Evaluation in Higher Education, 30(4), 417−444. DOI: 10.1080/02602930500099219
  • Reddy, M. Y. (2010). Design and development of rubrics to improve assessment outcomes: A pilot study in a master's level business program in India. Quality Assurance in Education, 19(1), 84−104. DOI: 10.1108/09684881111107771
  • Sluijsmans, D. M. A., Brand-Gruwel, S., van Merrienboer, J. J. G., & Bastiaens, T. J. (2003). The training of peer assessment skills to promote the development of reflection skills in teacher education. Studies in Educational Evaluation, 29(1), 23−42. DOI: 10.1016/S0191-491X(03)90003-4
  • Smith, H., Cooper, A., & Lancaster, L. (2002). Improving the quality of undergraduate peer assessment: A case for student and staff development. Innovations in Education and Teaching International, 39(1), 71−81. DOI: 10.1080/13558000110102904
  • Şahin, M. G., Taşdelen Teker, G., & Güler, N. (2016). An analysis of peer assessment through many facet Rasch model. Journal of Education and Practice, 7(32), 172−181.
  • Şahin, S. (2008). An application of peer assessment in higher education. The Turkish Online Journal of Educational Technology, 7(2), 5−10.
  • Şaşmaz Ören, F. (2018). Self, peer and teacher assessments: What is the level of relationship between them? European Journal of Education Studies, 4(7), 1−19. DOI: 10.5281/zenodo.1249959
  • Tan, Ş. (2015). Eğitimde ölçme ve değerlendirme KPSS el kitabı [Measurement and evaluation in education: KPSS handbook]. Ankara: Pegem.
  • Taşdelen Teker, G., Şahin, G., & Baytemir, K. (2016). Using generalizability theory to investigate the reliability of peer assessment. Journal of Human Sciences, 13(3), 5574−5586. DOI: 10.14687/jhs.v13i3.4155
  • Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249−276. DOI: 10.3102/00346543068003249
  • Topping, K. J. (2009). Peer assessment. Theory into Practice, 48(1), 20−27. DOI: 10.1080/00405840802577569
  • Topping, K. J., Smith, E. F., Swanson, I., & Elliot, A. (2000). Formative peer assessment of academic writing between postgraduate students. Assessment & Evaluation in Higher Education, 25(2), 149−169. DOI: 10.1080/713611428
  • van den Berg, I., Admiraal, W., & Pilot, A. (2006). Designing student peer assessment in higher education: analysis of written and oral peer feedback. Teaching in Higher Education, 11(2), 135−147. DOI: 10.1080/13562510500527685
  • Wen, M. L., & Tsai, C. C. (2006). University students’ perceptions of and attitudes toward (online) peer assessment. Higher Education, 51(1), 27−44. DOI: 10.1007/s10734-004-6375-8
  • Wiseman, C. S. (2008). Investigating selected facets in measuring second language writing ability using holistic and analytic scoring methods (Unpublished doctoral dissertation). Columbia University, New York.
  • Wiseman, C. S. (2012). Comparison of the performance of analytic vs. holistic scoring rubrics to assess L2 writing. Iranian Journal of Language Testing, 2(1), 59−92.
  • Wright, B. D., & Linacre, J. M. (1994). Reasonable mean-square fit values. Rasch Measurement Transactions, 8, 370–371.
  • Yakar, L. (2019). Tamamlayıcı ölçme ve değerlendirme teknikleri III [Complementary measurement and evaluation techniques III]. In N. Doğan (Ed.), Eğitimde ölçme ve değerlendirme [Measurement and evaluation in education] (pp. 245−270). Ankara: Pegem.
  • Yune, S. J., Lee, S. Y., Im, S. J., Kam, B. S., & Baek, S. Y. (2018). Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students. BMC Medical Education, 18(1), 1−6. DOI: 10.1186/s12909-018-1228-9
There are 65 citations in total.

Details

Primary Language English
Subjects Other Fields of Education
Journal Section Research Articles
Authors

Nagihan Boztunç Öztürk 0000-0002-2777-5311

Melek Gülşah Şahin 0000-0001-5139-9777

Mustafa İlhan 0000-0003-1804-002X

Publication Date October 31, 2019
Acceptance Date October 14, 2019
Published in Issue Year 2019 Volume: 8 Issue: 4

Cite

APA Boztunç Öztürk, N., Şahin, M. G., & İlhan, M. (2019). An analysis of scoring via analytic rubric and general impression in peer assessment. Turkish Journal of Education, 8(4), 258-275. https://doi.org/10.19128/turje.609073

Turkish Journal of Education is licensed under CC BY-NC 4.0