Research Article

Coping with Unbalanced Designs of Generalizability Theory: G String V

Year 2019, Volume 6, Issue 5, 57-69, 30.12.2019
https://doi.org/10.21449/ijate.658747

Abstract

The aim of this paper is to introduce software suited to generalizability theory analyses of unbalanced as well as balanced data sets. Because unbalanced data sets can easily arise while conducting a study, researchers need an easy way to cope with them other than deleting data to balance the design; the software G String V offers such a solution and is therefore introduced here. First, generalizability theory is briefly reviewed, followed by a description of the unbalanced synthetic data set used to conduct the analysis with the software. Explanations are then provided for installing the software, preparing the data, and carrying out the analysis step by step. The interpretation of the results is also explained. Finally, the limitations of the software are shared.
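For orientation only (this summary uses standard generalizability-theory notation and quantities, not material taken from the abstract above), a G study for the simplest fully crossed persons × items (p × i) design estimates the variance components σ²_p, σ²_i, and σ²_{pi,e}, and a D study with an assumed n'_i items combines them into error variances and reliability-like coefficients, roughly as sketched below.

% Relative and absolute error variances for a crossed p x i design,
% assuming n'_i items in the decision (D) study:
\sigma^2_\delta = \frac{\sigma^2_{pi,e}}{n'_i}, \qquad
\sigma^2_\Delta = \frac{\sigma^2_i}{n'_i} + \frac{\sigma^2_{pi,e}}{n'_i}

% Generalizability (relative) and dependability (absolute) coefficients:
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta}, \qquad
\Phi = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\Delta}

When the design is unbalanced (for example, different numbers of items or raters per person), the expected-mean-square equations used to estimate these variance components no longer take this simple balanced-design form, which is the practical difficulty that software such as G String V is meant to handle.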



Details

Primary Language: English
Subjects: Studies on Education
Section: Special Issue
Authors

Gülşen Taşdelen Teker (ORCID: 0000-0003-3434-4373)

Publication Date: December 30, 2019
Submission Date: October 28, 2019
Published in Issue: Year 2019, Volume 6, Issue 5

How to Cite

APA Taşdelen Teker, G. (2019). Coping with Unbalanced Designs of Generalizability Theory: G String V. International Journal of Assessment Tools in Education, 6(5), 57-69. https://doi.org/10.21449/ijate.658747
