Predicting item difficulty of science national curriculum tests

Tuesday, November 8th, 2016

Yasmine El Masri has published a journal article co-authored with Steve Ferrara, Peter W. Foltz and Jo-Anne Baird.

El Masri, Y.H., Ferrara, S., Foltz, P.W. & Baird, J. (2016) Predicting item difficulty of science national curriculum tests: the case of key stage 2 assessments, The Curriculum Journal [online].

Click here to download – the first 50 downloads are free.

Abstract: Predicting item difficulty is highly important in education for both teachers and item writers. Despite the identification of a large number of explanatory variables, predicting item difficulty remains a challenge in educational assessment, with empirical attempts rarely exceeding 25% of variance explained.

This paper analyses 216 science items from key stage 2 tests, which are national sampling assessments administered to 11-year-olds in England. Potential predictors (topic, subtopic, concept, question type, nature of stimulus, depth of knowledge and linguistic variables) were considered in the analysis. Coding frameworks used in similar studies were adapted and applied by two coders, who independently rated the items. Linguistic demands were gauged using a computational linguistic facility. The stepwise regression models predicted 23% of the variance, with extended constructed-response questions and photos being the main predictors of item difficulty.
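The stepwise approach described in the abstract can be illustrated in miniature. The sketch below is not the authors' code: it uses synthetic data, and the feature names (extended_constructed, photo, and so on) are assumptions standing in for the coded item characteristics. It shows forward stepwise selection, greedily adding the predictor that most improves adjusted R-squared until no candidate helps.

```python
# Illustrative sketch (not the paper's analysis): forward stepwise selection
# of coded item features as predictors of item difficulty.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_items = 216  # same number of items as analysed in the paper

# Hypothetical coded features; names are placeholders, data are synthetic.
feature_names = ["extended_constructed", "photo", "depth_of_knowledge",
                 "word_count", "subtopic_code"]
X = rng.normal(size=(n_items, len(feature_names)))
# Synthetic "difficulty" driven mainly by the first two features, plus noise.
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n_items)

def adjusted_r2(r2, n, p):
    """Penalise R^2 for the number of predictors p, given n observations."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def forward_stepwise(X, y, names, max_features=4):
    """Greedily add the feature that most improves adjusted R^2."""
    n = X.shape[0]
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    for _ in range(max_features):
        scores = []
        for j in remaining:
            cols = selected + [j]
            r2 = LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)
            scores.append((adjusted_r2(r2, n, len(cols)), j))
        score, j = max(scores)
        if score <= best:  # stop: no remaining feature improves the fit
            break
        best, selected = score, selected + [j]
        remaining.remove(j)
    return [names[j] for j in selected], best

chosen, adj_r2 = forward_stepwise(X, y, feature_names)
print(chosen, round(adj_r2, 2))
```

With this synthetic setup, the two planted predictors are recovered first; in the paper, the analogous role is played by extended constructed-response questions and photos.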

While a substantial part of the unexplained variance could be attributed to unpredictable interactions between variables, we argue that progress in this area requires improvement in both the theories and the methods employed. Future research needs to centre on improving coding frameworks as well as developing systematic training protocols for coders. These technical advances would pave the way to improved task design and reduced development costs for assessments.

