Model Choice in Educational and Psychological Testing: Practical Consequences of Using the Wrong Model
Statistical models typically do not fit data perfectly. Nevertheless, practitioners rely on them to draw conclusions that they hope are as reliable and valid as possible. The question that needs to be answered is: What are the practical consequences of using models that violate one or more assumptions to varying degrees? This project focuses in particular on item response theory (IRT) models, which encompass a large set of models widely popular in education, psychology, and health research. IRT is commonly used to construct tests and scales and to evaluate the psychometric quality of existing psychological measurement instruments. This project consists of a set of both simulation and empirical analyses. The goal is to assess whether the main conclusions in empirical research hold under different IRT models and under different violations of IRT model assumptions. A second aim is to develop tools that can serve as effect size measures and that give researchers practical information about the extent to which model choice and violations of model assumptions matter. These effect size measures will be formulated in terms of directly interpretable quantities, such as the number of correct and incorrect decisions made using the test.
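The kind of decision-based effect size described above can be illustrated with a small simulation sketch. This is not the project's actual methodology; it is a minimal, hypothetical example assuming a two-parameter logistic (2PL) data-generating model, a deliberately misspecified Rasch-type alternative (all discriminations fixed at 1), and an arbitrary pass/fail cutoff. The share of examinees who receive the same pass/fail decision under both models is one directly interpretable quantity of the sort the abstract mentions.

```python
import math
import random

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model:
    P(X=1 | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def simulate_scores(thetas, items, rng):
    """Simulate dichotomous responses and return each examinee's total score."""
    scores = []
    for theta in thetas:
        score = sum(1 for a, b in items if rng.random() < p_2pl(theta, a, b))
        scores.append(score)
    return scores

rng = random.Random(42)

# Hypothetical setup: 1000 examinees, 20 items (all values are illustrative).
thetas = [rng.gauss(0.0, 1.0) for _ in range(1000)]
items_2pl = [(rng.uniform(0.5, 2.0), rng.gauss(0.0, 1.0)) for _ in range(20)]
# Misspecified model: same difficulties, but discriminations ignored (a = 1).
items_rasch = [(1.0, b) for _, b in items_2pl]

cutoff = 12  # arbitrary pass/fail cutoff on the total score

pass_2pl = [s >= cutoff for s in simulate_scores(thetas, items_2pl, rng)]
pass_rasch = [s >= cutoff for s in simulate_scores(thetas, items_rasch, rng)]

# Decision agreement: proportion of examinees classified identically
# under the correct and the misspecified model.
agreement = sum(p == q for p, q in zip(pass_2pl, pass_rasch)) / len(thetas)
```

A value of `agreement` close to 1 would suggest that, for this test and cutoff, the model misspecification has little practical consequence for pass/fail decisions, even if formal fit statistics flag the misfit.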
Researchers and partners
Behavioural and Social Sciences, Psychology
- prof. dr. R.R. (Rob) Meijer, Psychometrics and Statistics
- dr. J.N. (Jorge) Tendeiro, Psychometrics and Statistics
- D.R. (Daniela) Crisan, MSc, Psychometrics and Statistics
Courses connected to this project
- Test Theory (PSBE2-06)
Publication connected to this project
- Crisan, D. R., Tendeiro, J. N., & Meijer, R. R. (in press). Investigating the practical consequences of model misfit in unidimensional IRT models. Applied Psychological Measurement.
Last modified: 09 August 2021, 4:06 p.m.