Computerized Adaptive Testing

Document Type: Original Article



Adapting item difficulty to the examinee's ability is the most efficient approach to ability testing. To this end, a series of items is presented to examinees in order to obtain an accurate estimate of their ability level. This idea underlies adaptive testing, which builds on the results of item response theory in the analysis of test questions. Adaptive testing is the natural outcome of Bayesian reasoning in estimating ability and analyzing item information functions, and it carries implications for both the construction and the interpretation of the test. The method admits a number of simple yet varied approximations. Its more sophisticated form is Computerized Adaptive Testing (CAT), which consists of a two-stage process: in the first stage, an item is presented whose difficulty matches the examinee's current estimated ability; in the second, the ability estimate is updated on the basis of the response to that item. These two stages are repeated until a stopping rule ends the test. The realistic planning, practical application, and protection of CAT demand considerable work, including attention to item pools, test security, and examinee issues.
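The two-stage loop described above can be sketched in code. This is a minimal illustration under the one-parameter logistic (Rasch) model; the item pool, the fixed-length stopping rule, and the examinee response function are illustrative assumptions, not part of the article.

```python
import math

def rasch_prob(theta, b):
    """P(correct) under the one-parameter logistic (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, theta=0.0, iters=25):
    """Newton-Raphson maximum-likelihood ability estimate.
    responses: list of (item_difficulty, score) pairs, score in {0, 1}."""
    for _ in range(iters):
        grad = sum(x - rasch_prob(theta, b) for b, x in responses)
        info = sum(p * (1 - p) for p in (rasch_prob(theta, b) for b, _ in responses))
        if info < 1e-9:
            break
        # Clamp: the MLE is infinite for all-correct or all-wrong patterns.
        theta = max(-4.0, min(4.0, theta + grad / info))
    return theta

def run_cat(item_difficulties, answer, test_length=10):
    """The two-stage CAT loop: (1) administer the remaining item whose
    difficulty is closest to the current ability estimate (the most
    informative item under the Rasch model), then (2) re-estimate ability
    from all responses so far; stop after a fixed number of items."""
    theta, responses = 0.0, []
    pool = list(item_difficulties)
    for _ in range(min(test_length, len(pool))):
        b = min(pool, key=lambda d: abs(d - theta))   # stage 1: item selection
        pool.remove(b)
        responses.append((b, answer(b)))              # administer and score
        theta = estimate_theta(responses, theta)      # stage 2: ability update
    return theta

# Illustrative run: a hypothetical 13-item pool and an examinee of true
# ability 1.0 who answers items at or below that level correctly.
pool = [i * 0.5 for i in range(-6, 7)]
estimate = run_cat(pool, answer=lambda b: 1 if b <= 1.0 else 0)
```

In an operational CAT, the fixed test length would be replaced by a stopping rule such as a target standard error of the ability estimate, and item selection would also account for exposure control.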


Afrooz, G., & Hooman, H. A. (1375 [1996]). The method of preparing an intelligence test. Tehran: University of Tehran.
Allen, M. J., & Yen, W. M. (1979). Introduction to measurement theory (psychometrics) (A. Delavar, Trans., 1374 [1995]). Tehran: SAMT.
Thorndike, R. L. (1982). Applied psychometrics (H. A. Hooman, Trans., 1369 [1990]). Tehran: University of Tehran.
Sattari, B. (1382 [2003]). Advanced applied psychometrics. Mashhad: Beh Nashr.
Baghi, H., Ferrara, S. F., & Gabrys, R. (1992). Student attitudes toward computer-adaptive test administrations. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
Colton, G. D. (1998). Exam security and high-tech cheating. The Bar Examiner, 67(3), 13-35.
Eggen, T. J. H. M. (1990). Innovative procedures in the calibration of measurement scales. In W. H. Schreiber & K. Ingenkamp (Eds.), International developments in large scale assessment (pp. 199-212). Windsor, Berkshire: NFER-NELSON.
Eggen, T. J. H. M. (2007). Choices in CAT models in the context of educational testing. In D. J. Weiss (Ed.). Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing. Retrieved [date]
Hornke L. F. (2000). Item response times in computerized adaptive testing. Psicologica, 21, 157-173.
Legg, S. M., & Buhr, D. C. (1992). Computerized adaptive testing with different groups. Educational Measurement: Issues and Practice, 11(2), 23-27.
Lord, F.M. (1983). Some test theory for tailored testing. In W.H. Holtzman (Ed.), Computer- assisted instruction, testing and guidance. New York: Harper & Row.
Olea, J., Revuelta, J., Ximenez, M. C., & Abad, F. H. (2000). Psychometric and psychological effects of review on computerized fixed and adaptive tests. Psicologica, 21, 175-189.
Sotaridona, L. S., Pornel, J. B., & Vallejo, A. (2003). Some applications of item response theory to testing. The Philippine Statistician, 52, 81-92.
Stone, G. E., & Lunz, M. E. (1994). The effect of review on the psychometric characteristics of computerized adaptive tests. Applied Measurement in Education, 7, 211-222.
Sutton, R. E. (1997). Equity and high stakes testing: Implications for computerized testing. Equity and Excellence in Education, 30(1), 5-15.
Tao, Y.-H., Wu, Y.-L., & Chang, H.-Y. (2008). A Practical Computer Adaptive Testing Model for Small-Scale Scenarios. Educational Technology & Society, 11(3), 259–274.
Triantafillou, E., Georgiadou, E., & Economides, A. A. (2006). CAT-MD: Computer adaptive test on mobile devices. University of Macedonia, Thessaloniki, Greece.
Van der Linden, W. J., & Glas, C. A. W. (Eds.) (2000). Computerized adaptive testing: Theory and practice. Dordrecht: Kluwer Academic Publishers.
Van der Linden, W. J. & Hambleton, R. K. (Eds.) (1996). Handbook of modern item response theory. New-York: Springer-Verlag.
Verhelst, N. D., & Glas, C. A. W. (1995). The one-parameter logistic model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Foundations, recent developments, and applications (pp. 215-237). New York: Springer-Verlag.
Vispoel, W. P., Hendrickson, A. B., & Bleiler, T. (2000). Limiting answer review and change on computer adaptive vocabulary tests: Psychometric and attitudinal results. Journal of Educational Measurement, 37, 21-38.
Vispoel, W. P., Rocklin, T. R., & Wang, T. (1994). Individual differences and test administration procedures: A comparison of fixed-item, computerized adaptive, and self-adapted testing. Applied Measurement in Education, 7, 53-59.
Wainer, H. (1993). Some practical considerations when converting a linearly administered test to an adaptive format. Educational Measurement: Issues and Practice, 12(1), 15-20.
Wainer, H., (Ed.) (2000). Computerized adaptive testing: A primer (2nd Edition). Hillsdale, NJ: Erlbaum.
Way, W. D. (1998). Protecting the integrity of computerized testing item pools. Educational Measurement: Issues and Practice, 17(4), 17-27.
Way, W. D. (2005). Practical questions in introducing computerized adaptive testing for K-12 assessments. Research Report 05-03.
Zara, A. R. (1992). An investigation of computerized adaptive testing for demographically-diverse candidates on the national registered licensure examination. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA.