Faculty of Science Learning, Teaching and Assessment Information and Resources

Staff development events arranged by the Faculty
School Research and Practice in HE Seminar hosted by PBS

Dr Jill Barber, University of Manchester
Five go marking an examination question: the use of adaptive comparative judgement to remove subjective bias

Comparative judgement can be used to manage subjectivity in assessment, leading to demonstrable fairness in the marking of open-ended questions that are not easily described by detailed marking schemes. The assessor (or judge) merely compares two answers and chooses a winner [1,2]. With a suitable sorting algorithm, repeated comparisons produce a ranking of scripts in order of merit. Grade boundaries are then determined by a separate review of scripts. This session will recount experiences of using this approach and will include a hands-on workshop with the supporting software.
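As a concrete illustration of the ranking step, the sketch below fits a simple Bradley-Terry model to a set of pairwise judgements and returns the scripts in order of estimated merit. This is not the seminar's ACJ software, and it omits the adaptive choice of pairs described by Pollitt [2]; the function name, the dummy "anchor" regularisation and the example data are all invented for illustration.

```python
from collections import defaultdict


def bradley_terry_ranking(judgements, iterations=200):
    """Rank scripts from (winner, loser) judgements with a Bradley-Terry fit.

    A dummy "anchor" opponent that beats and loses to every script once is
    added so that every quality estimate stays finite (a mild regularisation).
    """
    scripts = sorted({s for pair in judgements for s in pair})
    anchor = "__anchor__"            # assumed not to clash with real script IDs
    all_judgements = list(judgements)
    for s in scripts:
        all_judgements += [(s, anchor), (anchor, s)]

    players = scripts + [anchor]
    wins = defaultdict(float)        # total wins per player
    pair_counts = defaultdict(float) # comparisons per unordered pair
    for winner, loser in all_judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    quality = {p: 1.0 for p in players}   # initial quality parameters
    for _ in range(iterations):           # standard MM update for Bradley-Terry
        new_quality = {}
        for i in players:
            denom = sum(
                pair_counts[frozenset((i, j))] / (quality[i] + quality[j])
                for j in players
                if j != i and frozenset((i, j)) in pair_counts
            )
            new_quality[i] = wins[i] / denom
        total = sum(new_quality.values())
        quality = {p: q / total for p, q in new_quality.items()}

    return sorted(scripts, key=lambda s: quality[s], reverse=True)


if __name__ == "__main__":
    # Invented judgements: each tuple is (winning script, losing script).
    example = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("C", "D"), ("B", "D")]
    print(bradley_terry_ranking(example))  # best to worst: ['A', 'B', 'C', 'D']
```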
We have used ACJ (Adaptive Comparative Judgement) software in the marking of a final-year Global Health unit. Students study all of the top 15 causes of premature death worldwide, and research three of these causes in depth. An example of a short essay question in the online examination is: "Many of the principal causes of premature death can be reduced by the application of small measures by many people. Imagine yourself in your future career, perhaps as a community pharmacist or a chemistry teacher. Describe three simple interventions that you could introduce to reduce the death rate. Identify the disease or other cause of death, the intervention and why you believe it would help."

Answers may address themes as disparate as HIV/AIDS, cancer and road traffic accidents. Students are assessed less on knowledge and more on how they are able to apply their knowledge. Thus the precise area of expertise of the assessor is less important than in some assessments.
We used peer assessment for marking a question in a mock examination. Students (n=50) typically made nine comparisons of their peers' work, and the instructor determined the grade boundaries. Students answered a short questionnaire; the majority found the assessment process useful and not too time-consuming (about 30 minutes in total), but would prefer a smaller number of judgements.
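The boundary step can be made concrete with a small sketch: given the rank order from the comparative-judgement stage and the boundary positions the instructor chooses after reviewing scripts, each script is mapped to a grade. The function name, grade labels and cut-off positions below are hypothetical, not taken from the unit described above.

```python
def assign_grades(ranked_scripts, boundaries):
    """Map a best-to-worst ranking to grades.

    boundaries maps each grade label to the rank position at which that grade
    stops (exclusive), listed from the highest grade downwards.
    """
    grades = {}
    for position, script in enumerate(ranked_scripts):
        for grade, cutoff in boundaries.items():
            if position < cutoff:
                grades[script] = grade
                break
        else:
            grades[script] = "Fail"   # below every boundary
    return grades


if __name__ == "__main__":
    ranking = ["S07", "S03", "S12", "S01", "S09"]           # hypothetical CJ ranking
    cutoffs = {"First": 1, "2:1": 3, "2:2": 4, "Pass": 5}    # hypothetical boundaries
    print(assign_grades(ranking, cutoffs))
```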
The corresponding question in the summative examination was marked using ACJ by staff, who completed a similar questionnaire. It was possible to compare ACJ marks with marks obtained by classical methods. There were substantial discrepancies, with the ACJ marks being judged more accurate overall.
References

1. Thurstone, L. L. (1927). A law of comparative judgement. Psychological Review, 34, 273-286.
2. Pollitt, A. (2012). The method of adaptive comparative judgement. Assessment in Education: Principles, Policy & Practice, 19, 281-300.
3. Steedle, J. T., & Ferrara, S. (2016). Evaluating comparative judgment as an approach to essay scoring. Applied Measurement in Education, 29, 211-223.
Maintained by SCSADE@ljmu.ac.uk.
Last Update: 15/09/2018.