
The Framework of Dimensionality from Unidimensionality to Multidimensionality

Year 2022, Volume: 35, Issue: 3, 516-531, 22.12.2022
https://doi.org/10.19171/uefad.1143164

Abstract

The concept of dimensionality has long been a subject of debate. The most general understanding of the dimensionality of test scores is that it is the minimum number of dimensions, or of statistical abilities associated with the measured construct, required to fully describe all test-related differences among test takers in the population. The dimensionality of a test does not depend on the test items alone; it also arises from the interaction between the test takers in the population and the items. If dimensionality assessments show that the test scores are strongly multidimensional, that is, if some dimensions appear to be statistically independent of one another, one solution may be to form two or more subtests that are more homogeneous without narrowing the content coverage of the test. Nevertheless, many test blueprints treat logical independence among items as a requirement, whereas test developers consider logical dependence among some items necessary for measuring certain complex proficiencies. To preserve the precision and accuracy of test scores, such logically interrelated items should be scored as a single item. If these items are to be scored independently, statistical techniques based on examining conditional covariances between items should be used to decide whether they really provide independent information, and at least some items should be combined in scoring. Although there is no sharp boundary between multidimensionality and unidimensionality, it should be examined whether multidimensionality arises from the planned test structure or from unwanted factors unrelated to the targeted construct. Many methods are available for determining dimensionality. Within the scope of this study, an example of using parallel analysis and Velicer's MAP test to determine dimensionality is presented.
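
The abstract names two of these procedures explicitly: Horn's parallel analysis and Velicer's MAP test. The sketch below is a minimal NumPy illustration of how the two criteria operate; it is not the analysis reported in the article (which cites R tools such as the paran and EFA.dimensions packages), and the function names `parallel_analysis` and `velicer_map`, the iteration count, and the simulated usage data are all illustrative assumptions.

```python
# Minimal illustrative sketch (assumed names and defaults, not the article's code):
# Horn's parallel analysis and Velicer's MAP test on a persons-by-items score matrix.
import numpy as np


def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Retain components whose observed eigenvalue exceeds the chosen percentile
    of eigenvalues obtained from random normal data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eig = np.empty((n_iter, p))
    for i in range(n_iter):
        random_data = rng.standard_normal((n, p))
        rand_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False)))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold))


def velicer_map(data):
    """Velicer's MAP: retain the number of components m that minimizes the
    average squared partial correlation after partialling out m components."""
    r = np.corrcoef(data, rowvar=False)
    p = r.shape[0]
    eigvals, eigvecs = np.linalg.eigh(r)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs * np.sqrt(np.clip(eigvals, 0, None))  # principal component loadings
    off_diag = ~np.eye(p, dtype=bool)
    avg_sq = [np.mean(r[off_diag] ** 2)]                      # m = 0: raw correlations
    for m in range(1, p):
        partial_cov = r - loadings[:, :m] @ loadings[:, :m].T
        d = np.sqrt(np.diag(partial_cov))
        partial_corr = partial_cov / np.outer(d, d)
        avg_sq.append(np.mean(partial_corr[off_diag] ** 2))
    return int(np.argmin(avg_sq))                             # index = number of components


# Hypothetical usage with a simulated 500 x 10 item-score matrix:
# scores = np.random.default_rng(1).standard_normal((500, 10))
# print(parallel_analysis(scores), velicer_map(scores))
```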

References

  • Ackerman, T. A. (1994). Using multidimensional item response theory to understand what items and tests are measuring. Applied Measurement in Education, 7(4), 255-278. https://doi.org/10.1207/s15324818ame0704_1
  • Armor, D. J. (1973). Theta reliability and factor scaling. Sociological Methodology, 5, 17-50. https://doi.org/10.2307/270831
  • Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98-104. https://doi.org/10.1037/0021-9010.78.1.98
  • Dinno, A. (2009). Exploring the sensitivity of Horn’s parallel analysis to the distributional form of random data. Multivariate Behavioral Research, 44, 362-388. https://doi.org/10.1080/00273170902938969
  • Dinno, A. (2018). paran: Horn's test of principal components/factors [R package].
  • Drasgow, F., & Parsons, C. (1983). Applications of unidimensional item response theory models to multidimensional data. Applied Psychological Measurement, 7, 189-199. https://doi.org/10.1177/014662168300700207
  • Enders, C. K., & Bandalos, D. L. (1999). The effects of heterogeneous item distributions on reliability. Applied Measurement in Education, 12(2), 133-150. https://doi.org/10.1207/s15324818ame1202_2
  • Fabrigar, L. R., Wegener, D. T., MacCallum, R. C. & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299. https://doi.org/10.1037/1082-989X.4.3.272
  • Feldt, L. S., & Qualls, A. L. (1996). Bias in coefficient alpha arising from heterogeneity of test content. Applied Measurement in Education, 9(3), 277-286. https://doi.org/10.1207/s15324818ame0903_5
  • Ford, J. K., MacCallum, R. C. & Tait, M. (1986). The applications of exploratory factor analysis in applied psychology: A critical review and analysis. Personnel Psychology, 39, 291-314. https://doi.org/10.1111/j.1744-6570.1986.tb00583.x
  • Green, S. B., Lissitz, R. W., & Mulaik, S. A. (1977). Limitations of coefficient alpha as an index of test unidimensionality. Educational and Psychological Measurement, 37(4), 827-838. https://doi.org/10.1177/001316447703700403
  • Gulliksen, H. (2013). Theory of mental tests. Routledge.
  • Hattie, J. (1985). Methodology review: Assessing unidimensionality of tests and items. Applied Psychological Measurement, 9(2), 139-164. https://doi.org/10.1177/014662168500900204
  • Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179-185.
  • Lord, F. M. & Novick M. R. (2008). Statistical theories of mental test scores. Addison-Wesley Publishing Company.
  • Luecht, R. M., & Miller, T. R. (1992). Unidimensional calibrations and interpretations of composite traits for multidimensional tests. Applied Psychological Measurement, 16(3), 279-293. https://doi.org/10.1177/014662169201600
  • McDonald, R. P. (1981). The dimensionality of tests and items. British Journal of Mathematical and Statistical Psychology, 34, 100-117. https://doi.org/10.1111/j.2044-8317.1981.tb00621.x
  • McDonald, R. P. (1999). Test theory: A unified treatment. Lawrence Erlbaum Associates.
  • McNemar, Q. (1946). Opinion-attitude methodology. Psychological Bulletin, 43(4), 289–374. https://doi.org/10.1037/h0060985
  • Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.
  • O’Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behavior Research Methods, Instruments, & Computers, 32(3), 396-402.
  • O'Connor, B. P. (2022). EFA.dimensions: Exploratory factor analysis functions for assessing dimensionality [R package].
  • Reckase, M. D. (1979). Unifactor latent trait models applied to multifactor tests: Results and implications. Journal of Educational Statistics, 4(3), 207-230. https://doi.org/10.2307/1164671
  • Reckase, M. D., Ackerman, T. A., & Carlson, J. E. (1988). Building a unidimensional test using multidimensional items. Journal of Educational Measurement, 25(3), 193-203. https://doi.org/10.1111/j.1745-3984.1988.tb00302.x
  • Revelle, W. (2007). Determining the number of factors: The example of the NEO-PI-R. http://personality-project.org/r/book/numberoffactors.pdf.
  • Silverstein, A. B. (1977). Comparison of two criteria for determining the number of factors. Psychological Reports, 41, 387-390. https://doi.org/10.2466/pr0.1977.41.2.387
  • Silverstein, A. B. (1987). Note on the parallel analysis criterion for determining the number of common factors or principal components. Psychological Reports, 61, 351-354. https://doi.org/10.2466/pr0.1987.61.2.351
  • Stout, W. (1987). A nonparametric approach for assessing latent trait unidimensionality. Psychometrika, 52(4), 589-617. https://doi.org/10.1007/bf02294821
  • Stout, W. F. (1990). A new item response theory modeling approach with applications to unidimensionality assessment and ability estimation. Psychometrika, 55, 293- 325. https://doi.org/10.21236/ada207301
  • Tate, R. (2003). A comparison of selected empirical methods for assessing the structure of responses to test items. Applied Psychological Measurement, 27(3), 159-203. https://doi.org/10.1177/0146621603027003
  • Tate, R. (2012). Test dimensionality. In Tindal, G., & Haladyna, T. M. (Eds.), Large-scale assessment programs for all students: Validity, technical adequacy, and implementation (pp. 181-213). Routledge.
  • Thurstone, L. L. (1931). The measurement of social attitudes. The Journal of Abnormal and Social Psychology, 26(3), 249–269. https://doi.org/10.1037/h0070363
  • Velicer, W. F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41, 321-327. https://doi.org/10.1007/bf02293557
  • Whitely, S. E., & Dawis, R. V. (1974). The nature of objectivity with the Rasch model. Journal of Educational Measurement, 11(3), 163-178. https://doi.org/10.1111/j.1745-3984.1974.tb00988.x
  • Zwick, W. R. & Velicer, W. F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99(3), 432-442. https://doi.org/10.1037/0033-2909.99.3.432


Details

Primary Language: Turkish
Subjects: Field Education
Section: Articles
Authors

Fulya Baris Pekmezci (ORCID: 0000-0001-6989-512X)

Publication Date: December 22, 2022
Submission Date: July 11, 2022
Published in Issue: Year 2022, Volume: 35, Issue: 3

How to Cite

APA Baris Pekmezci, F. (2022). Tek Boyutluluktan Çok Boyutluluğa Boyutluluğun Çerçevesi. Journal of Uludag University Faculty of Education, 35(3), 516-531. https://doi.org/10.19171/uefad.1143164