Bennett, James (2006) Second-Marking and the Academic Interpretative Community: ensuring reliability, consistency and objectivity between markers. Investigations in university teaching and learning, 4 (1). pp. 80-85. ISSN 1740-5106
A central issue in any attempt to improve and monitor inter-marker reliability is the need to comply with the QAA Code of Practice’s stipulation that ‘institutions must have transparent and fair mechanisms for marking and moderating marks’ (Section 7). Academic practitioners have devised a range of theories and practices to meet this goal, but these often focus on different moments in the assessment process. Thus White’s (2000) article on monitoring argues for attention to the end of the marking process: second markers write a report focusing on the behaviour of the first marker, rather than on the student’s individual paper as in double-marking. In contrast, Sandhu’s (2004) article ‘The usage of statement banks as a means of assessing large groups’ suggests that objective criteria can be made known not only through assessment schemes published at the start of the assessment process, but also through standardised feedback statements that clearly indicate where a student’s piece of work lies within the assessment guidelines.
For the newly appointed academic, arguably the right response to this plethora of options and theories is to recognise that no single method can solve the problems of fairness, transparency and reliability consistently encountered between markers. Although I feel this conclusion is valid, something about all these techniques remains unsatisfying: each still relies on markers interpreting one document or another in the same manner. That is, all markers must belong to the same ‘interpretative community’, not only in order to agree on marks but also to experience the kind of ‘systematic disagreement’ that White (2000) identifies as an area of healthy debate in academic assessment, one easily resolved by process monitoring. I am largely in favour of White’s support of monitoring, but want to consider here some of the pre-conditions necessary for a system of inter-marker reliability to work consistently. In this article, I therefore examine the centrality of inter-marker reliability to good academic practice. I then outline debates about the interpretative community, mapping these theories onto the issues involved in university assessments, particularly the challenges of integrating hourly paid lecturers (HPLs) into this community.