Reducing variability, increasing reliability: exploring the psychology of intra- and inter-rater reliability

Aslett, Helen J. (2006) Reducing variability, increasing reliability: exploring the psychology of intra- and inter-rater reliability. Investigations in University Teaching and Learning, 4 (1), pp. 86-91. ISSN 1740-5106

Abstract

Reliability relates to the fairness and consistency of assessment. Section 7 of the Quality Assurance Agency (QAA) Code of Practice (2000) requires that "Institutions have transparent and fair mechanisms for marking and moderating marks". At an institutional level, the London Metropolitan Assessment Framework states: "There should be consistency among assessors in the marking of student work against relevant criteria" (Section A2:2). It is therefore vital that methods of assessment are highly reliable. However, the reliability of the assessment process should never be assumed.

There are two main forms of reliability: intra-rater and inter-rater reliability. Intra-rater reliability is the internal consistency of an individual marker; inter-rater reliability is the consistency between two or more markers. The former should perhaps be considered the more important of the two: without internal consistency over a series of scripts, the marks assigned will be haphazard and unjustifiable, and no form of moderation or second marking will be able to resolve this. This paper examines some of the key psychological variables that can potentially impinge on examiner reliability.
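Although the paper itself is concerned with psychological variables rather than statistics, inter-rater reliability is conventionally quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is purely illustrative and is not drawn from the paper: the two markers' grades are hypothetical, and the `cohen_kappa` helper is written here solely for the example.

```python
from collections import Counter

def cohen_kappa(marks_a, marks_b):
    """Chance-corrected agreement between two markers over the same scripts."""
    n = len(marks_a)
    # Observed agreement: proportion of scripts given the same grade by both markers.
    observed = sum(a == b for a, b in zip(marks_a, marks_b)) / n
    # Expected chance agreement, from each marker's marginal grade frequencies.
    freq_a, freq_b = Counter(marks_a), Counter(marks_b)
    expected = sum(freq_a[g] * freq_b[g] for g in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grades assigned by two markers to the same ten scripts.
marker_1 = ["A", "B", "B", "C", "A", "C", "B", "A", "C", "B"]
marker_2 = ["A", "B", "C", "C", "A", "B", "B", "A", "C", "B"]

print(f"kappa = {cohen_kappa(marker_1, marker_2):.2f}")  # kappa = 0.70
```

A kappa near 1 indicates agreement well beyond chance, while a value near 0 indicates agreement no better than chance; it is the latter situation that moderation and second-marking mechanisms are intended to detect.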
