Methods of Determining the Factor Structure (Dimensionality) of Test Data Based on Traditional and Modern Measurement Approaches

Article type: Research article

Author

Assistant Professor, Faculty of Psychology, Kharazmi University, Tehran, Iran

Abstract

Objective: The dimensions or factors of a test are usually examined by analyzing data from its administration using the statistical methods of exploratory and confirmatory factor analysis. Over time, and depending on different theoretical models, various methods have been proposed for determining the number of dimensions or factors. The purpose of the present article is to provide a coherent review of the major, widely used methods for this purpose and to examine their strengths and weaknesses.
Methods: To this end, methods for determining the number of dimensions of test data are described, their performance on real data is discussed, and finally the conditions under which each of them should be used are outlined.
Results: Applying the various methods of determining dimensions or factors requires the researcher's insight into, and understanding of, the foundations and principles of these methods, the nature of the data, and the conditions present in them.
Conclusion: The various approaches to determining dimensionality yield reliable results only under conditions consistent with their nature; otherwise, the analyses they produce cannot be trusted.

Article Title [English]

Methods of Determining Factor Structure of the Test Data Based on Traditional and Modern Measurement Approaches

Author [English]

  • Balal Izanloo
Assistant Professor, Faculty of Psychology and Education, Kharazmi University, Tehran, Iran
Abstract [English]

Objective: The dimensions or factors of a test are usually examined by analyzing the data obtained from its administration using the statistical methods of exploratory and confirmatory factor analysis. Over time, depending on different theoretical models, various methods have been proposed for determining the number of dimensions or factors. The purpose of this article is to review the major, commonly used methods for this purpose and to examine their strengths and weaknesses.
Methods: To achieve this goal, methods for determining the number of dimensions of test data are described, their performance on real data is discussed, and finally the conditions under which each of them should be used are described.
Results: Applying the different methods of determining dimensions or factors requires the researcher's insight into and understanding of the foundations and principles of these methods, the nature of the data, and the conditions present in them.
Conclusion: Different approaches to determining dimensionality provide reliable results only under conditions commensurate with their nature; otherwise, the analyses performed with them are not reliable.

Keywords [English]

  • latent trait theory
  • item response theory
  • factor
  • dimension
  • construct
  • factor analysis
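As an illustration of one widely used factor-retention method of the kind the article reviews, the sketch below implements Horn's parallel analysis: components are retained when their observed eigenvalues exceed the corresponding eigenvalues obtained from random data of the same dimensions. This is a minimal NumPy sketch, not the article's own procedure; the function name, the 95th-percentile criterion, and the simulated two-factor data are illustrative assumptions.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, quantile=95, seed=0):
    """Horn's parallel analysis: retain components whose observed
    correlation-matrix eigenvalues exceed the chosen quantile of
    eigenvalues from random normal data of the same n x p shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted in descending order
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues of correlation matrices of random data
    rand_eig = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, quantile, axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

# Hypothetical example: six variables generated from two factors,
# so parallel analysis should suggest retaining two components.
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
loadings = np.zeros((2, 6))
loadings[0, :3] = 0.8   # first factor loads on variables 1-3
loadings[1, 3:] = 0.8   # second factor loads on variables 4-6
x = factors @ loadings + 0.5 * rng.standard_normal((500, 6))
k, _, _ = parallel_analysis(x)
print(k)  # → 2
```

In practice, established implementations such as `fa.parallel` in the R package psych (Revelle, 2018) or `nFactors` (Raiche & Magis, 2010) cited by the article would be preferred over a hand-rolled version.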