Aslett, Helen J. (2006) Reducing variability, increasing reliability: exploring the psychology of intra- and inter-rater reliability. Investigations in university teaching and learning, 4 (1). pp. 86-91. ISSN 1740-5106
Reliability relates to the fairness and consistency of assessment. Section 7 of the Quality Assurance Agency (QAA) Code of Practice (2000) requires that: “Institutions have transparent and fair mechanisms for marking and moderating marks”. At an institutional level, the London Metropolitan Assessment Framework states: “There should be consistency among assessors in the marking of student work against relevant criteria” (Section A2:2). It is therefore vital that methods of assessment have strong reliability. However, the reliability of the assessment process should never be assumed.
There are two main forms of reliability: intra- and inter-rater reliability. Intra-rater reliability is the internal consistency of an individual marker; inter-rater reliability is the consistency between two or more markers. The former should perhaps be considered the more important of the two: without internal consistency over a series of scripts, the marks assigned will be haphazard and unjustifiable, and no form of moderation or second marking will be able to resolve this. This paper examines some of the key psychological variables that can potentially impinge on examiner reliability.
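Inter-rater reliability is commonly quantified with an agreement statistic such as Cohen's kappa, which corrects the raw percentage agreement between two markers for the agreement expected by chance. The paper itself does not prescribe a particular measure; the sketch below, using invented marker data and plain Python, illustrates how kappa might be computed for two markers grading the same set of scripts.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical markers grading ten scripts into bands A-D.
marker_1 = ["A", "B", "B", "C", "A", "D", "C", "B", "A", "C"]
marker_2 = ["A", "B", "C", "C", "A", "D", "C", "B", "B", "C"]
print(round(cohens_kappa(marker_1, marker_2), 3))  # → 0.722
```

A kappa near 1 indicates strong agreement beyond chance; values near 0 suggest the markers agree no more often than random banding would predict. Note that kappa captures inter-rater reliability only — it cannot detect the intra-rater drift the abstract identifies as the more fundamental problem.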
Uncontrolled Keywords: Investigations in university teaching and learning, examiner reliability, assessment, moderation, second-marking
Subjects: 300 Social sciences > 370 Education
Department: School of Social Professions; Centre for Professional Education and Development (CPED)
Depositing User: David Pester
Date Deposited: 09 Apr 2015 12:22
Last Modified: 13 Oct 2016 10:50