Implementing assessment innovations in higher education
PhD ceremony: Ms A.J. (Anja) Boevé
When: May 14, 2018
Supervisors: prof. dr. R.R. (Rob) Meijer, prof. dr. R.J. (Roel) Bosker
Co-supervisor: prof. dr. C.J. (Casper) Albers
Where: Academy building RUG
Faculty: Behavioural and Social Sciences
In this thesis, several assessment innovations were investigated in the context of education at the Faculty of Behavioural and Social Sciences. The main findings were:
- There was no difference in exam results between paper-based and computer-based multiple-choice exams, although students experienced the two modes differently and differed in which mode they preferred.
- Reporting subscores of exam results (e.g. for different levels of knowledge or different question types) is unlikely to add value, owing to the low reliability of such subscores. Well-constructed open questions can contribute to better measurement precision.
- The number of practice tests students used was positively related to their exam grades, although the strength of this relation differed between courses. Within a course, grades in a year without practice questions did not differ from grades in years when practice questions were offered.
- Patterns of study behaviour did not appear to differ between students in a flipped and a non-flipped course, and were only weakly related to grades. Student acceptance of the flipped classroom and contextual factors may explain the extent to which students are willing to change their study behaviour.
- Based on the amount of variation in grades over time and between courses, it was demonstrated how accounting for the naturally expected fluctuation in grades can contribute to a better evaluation of the effectiveness of innovations in higher education.

These findings provide insight into practical implementation considerations and avenues for further research into assessment innovations in higher education.
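The final finding, on natural grade fluctuation, can be illustrated with a small simulation. This is a hypothetical sketch only: the grade scale, means, and variance values below are assumptions for illustration, not data from the thesis.

```python
import random
import statistics

# Hypothetical illustration: yearly mean course grades fluctuate naturally,
# so a naive before/after comparison can mistake noise for an effect.
# All numbers are assumptions for this sketch, not thesis data.
random.seed(42)

TRUE_MEAN = 6.5        # assumed long-run mean course grade (Dutch 1-10 scale)
BETWEEN_YEAR_SD = 0.3  # assumed natural year-to-year fluctuation of the mean

# Simulate ten years of mean grades with no real change in the course.
yearly_means = [random.gauss(TRUE_MEAN, BETWEEN_YEAR_SD) for _ in range(10)]

# A naive evaluation compares the newest year against the previous year.
naive_diff = yearly_means[-1] - yearly_means[-2]

# A fluctuation-aware evaluation asks whether the newest year lies outside
# the range expected from historical year-to-year variation alone.
hist_mean = statistics.mean(yearly_means[:-1])
hist_sd = statistics.stdev(yearly_means[:-1])
z_score = (yearly_means[-1] - hist_mean) / hist_sd

print(f"naive year-on-year difference: {naive_diff:+.2f}")
print(f"z-score vs. historical years:  {z_score:+.2f}")
# Even with no innovation, the naive difference is typically nonzero,
# while the z-score places it in the context of expected fluctuation.
```

The design point is that the historical spread (`hist_sd`) provides the yardstick against which an apparent improvement after an innovation should be judged, rather than a single-year difference.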