Disclaimer: This page is only for reference by staff and students at TEIs operating under the Common Awards scheme.
Durham University staff and students should instead refer to the Learning and Teaching Handbook here.

The information on this page is reviewed every three months.

 

Extract from Durham University Learning & Teaching Handbook

6.1.1: University Policy on the Quality Assurance of Examination and Assessment

Outline of University Policy

 

1. The overall purposes of the University's quality assurance mechanisms within the assessment process are:

a. to guarantee that departments apply their own and the University's agreed marking criteria appropriately across the range of modules they teach;

b. to guarantee that departments/schools maintain an overall consistency of standards across their various modules;

c. to protect candidates against bias, conscious or otherwise, on the part of examiners (this statement should be read in the context of the University's clear statement in General Regulation VII that 'matters of academic judgment cannot be appealed');

d. to enable the University to comply with the various regulatory requirements set by the Office for Students, and by other bodies which regulate the University or specific parts of it (such as PSRBs for accredited programmes, where we determine we need to comply with those requirements).

2. The University policy is therefore that all departments should have robust mechanisms for marking, and for the moderation and assurance of marks. These should be outlined in departmental assessment policies, monitored and approved via Education Committee (typically delegated via the Chairs of Faculty Education Committees). The departmental assessment policy – or a student-facing summary of that policy – must be made known to students.

3. Each departmental assessment policy will specify how core University requirements – the anonymous marking of examination scripts, the anonymous classification of degree results, the use of a mark proforma (or digital equivalent) for examination scripts, and the moderation of all summative assessment – are used within the department. Any deviation from core requirements would require exceptional approval from Education Committee.

4. Policies will also specify where, how and why any further assurance practices – such as anonymous marking of other assessments, the wider use of marking proforma, template or objective marking, or statistical moderation – are used within the Department. 

 

5. Core and further assurance practices are outlined below under the following headings:

  • Anonymity in Assessments & Assurance (paragraphs 6-13);

  • The Use of Mark Proforma & Templates (paragraphs 14-17);

  • Use of Sample and Full Moderation (paragraphs 18-34).

Anonymity in Assessments and Assurance

6. Anonymity is an important element in the University's strategy for the quality assurance of the assessment process. The rationale for anonymity is the protection of candidates against the possibility of bias in assessment, and the removal of any potential for students to perceive bias (providing reassurance for staff and students).

7. However, the desirability of anonymity is balanced against the need to provide students with clear and tailored feedback, and the need to ensure assessment processes are efficient, thereby facilitating the return of marks and feedback in good time. For this reason, while anonymity is desirable in principle, the University accepts that it will not be feasible, or necessarily the best option, in all circumstances. The following passages outline where anonymity must or could be preserved.

Anonymity in Marking

Anonymous marking of examination scripts

8. Anonymous codes are always used on physical examination scripts, and the University does not require that students are provided with specialised feedback on performance in individual examinations. In light of this, it continues to be University policy that all physical university examinations must be sat and marked anonymously.

Anonymous Marking of Coursework

9. Anonymous marking, although highly desirable, can conflict with the need to give feedback, especially in the case of coursework contributing to summative assessment. Feedback cannot be given on coursework before the end of the module if anonymity is to be preserved: the administrative complexities of using a separate code to mark each assignment anonymously are such as to make this an unrealistic option.

10. It is University policy to ensure that feedback to students on assessed coursework is a priority (LTH 6.1.5). This is because Education Committee judges the advantage to students in terms of their learning experience through receiving feedback to outweigh any disadvantage resulting from removing anonymity. Feedback on coursework will normally be given before the end of the module to support learning within the module. Therefore there is no requirement that coursework be marked anonymously, although departments/schools wishing to include this in their policies are free to do so.

Anonymous marking of major projects and dissertations

11. Anonymity is probably the most secure quality assurance process against bias and prejudice. However, in the case of major projects and dissertations it is sometimes inevitable that the supervisor will also be the first marker: this is often a consequence of maintaining the link between research expertise and this kind of project work. In such situations it is impossible to maintain anonymity within the marking process. Therefore:

a. if possible, major projects and dissertations should be marked under full anonymity;

b. where anonymity cannot be maintained for the first marker, if possible the second (and any subsequent) marking should be anonymous. To facilitate this, the work should be submitted using a code even though the first marker may know the identity of the student.

Anonymity in meetings of Boards of Examiners

Anonymous consideration of students at boards of examiners

12. All meetings of boards of examiners, including those considering Level 1 marks, should be carried out with the students under consideration remaining anonymous. Consideration of progression, consideration of academic discretion and/or discretion in light of serious adverse circumstances, consideration of allegations of misconduct, consideration of appeals, and the classification of degrees must all be carried out anonymously.

13. This means that boards of examiners must make use of anonymised mark sheets and only the chair, secretary or other designated member(s) of the Board should have access to detailed medical and other evidence of serious adverse circumstances naming the student(s) concerned. They should communicate the necessary information to the board using the anonymous code (and may share redacted summary information if required in particularly complex cases). The minutes of the board of examiners should also refer to students by code. 

The Use of Mark Proforma & Templates

Annotation of Scripts

14. No marks or judgmental comments are to be written on examination scripts. It is, however, permitted to make factual annotations where these assist the marking process, for example in marking a language exercise or a mathematical problem. Marks or judgmental comments may be written on summative coursework, in order to support the provision of effective feedback to students. More substantive comments should however be summarised in written feedback provided to the student (see LTH 6.1.5 on the provision of feedback on assessed work).

Use of a Mark Proforma

15. A mark proforma (or digital equivalent) is a separate sheet on which the mark itself and the rationale for the mark awarded are recorded. The proforma is used in the assurance process. Proforma (or digital equivalents) must be used for all examination scripts, and may be used for other forms of assessment (however, as students will be provided with written feedback on all other forms of assessment, and such feedback will include the information noted under 16a-b and d, proforma are not required).

16. The proforma should:

a. reflect the agreed level descriptors and assessment criteria for the work concerned;

b. include a brief statement by the marker of the rationale for the mark awarded (consistent with the assessment criteria);

c. include, where appropriate, evidence of communication between markers and the rationale for the agreed mark reached or for failure to reach agreement and, in such an event, the steps then taken (e.g. to refer the work elsewhere internally, or to the external examiner);

d. be retained for the duration of the period for which assessment is retained, and be kept with the assessed work (see also Learning and Teaching Handbook 6.1.4).

Marking Templates

17. Marking to a template involves marking to a specified set of answers with marks clearly allocated for each element of the work. Objective marking is closely related to marking to a template but may also include marking by computer (e.g. of a multiple choice question using an optical mark reader). Where marking templates or objective marking are used, departments should make use of sample moderation to check that templates have been applied, and answers marked, correctly.

Use of Sample and Full Moderation (including double-marking)

General principles of Moderation

18. Full Moderation (including double marking) and Sample Moderation are distinct mechanisms which may be used as part of the process of quality assurance of assessment – these terms are defined below. 

19. Departments should clearly distinguish between the use of full moderation (as applied to all scripts in a run) and sample moderation (of a sample of scripts) in their assessment policies as each serves a different purpose and imposes different actions upon examiners.

20. Sample moderation is the University norm, and is appropriate for use with the majority of pieces of assessed work. While full and sample moderation are different processes, they may be used in combination, as outlined in paragraph 34, below.

21. Statistical tools and techniques may also be used to review assessment outcomes. These can enable departments/schools to identify any modules or assessments where marking profiles are out of line with departmental norms, and/or individual candidate performance on particular modules which appears to be out of line with their overall performance, so that the underlying causes can be addressed.

Sample Moderation

Definition and Purpose

22. Sample Moderation is a process which enables a moderator to test for (and if necessary, identify) any systematic defects in the initial marking process, by reviewing an appropriate sample of assessed work. Guidance on appropriate samples is given in paragraphs 26-27, below.

23. When using sample moderation, the role of the moderator is to ensure that the scale, range and standards of marking are appropriate, with any recommendations for change based upon the identification of systematic issues with that marking, and resolutions being applied systematically to either the whole run of scripts, or a particular subset of those scripts (e.g. those at the pass/fail boundary).

Taking action where issues are identified

24. If sample moderation reveals a pattern of excessively generous or punitive marking, omission or over-emphasis of some element of answers, large fluctuations in marks, or use of an excessively narrow range of marks, then this should be rectified by an appropriate systematic review of the marks. This may involve double-marking all work, but could also be a review of lesser scope, for instance of the marks for one question on an exam paper or within a particular mark range. It is expected that all scripts displaying the same general issue(s) will normally be double-marked by the moderator (or an identified third party), or be subject to further review by the first marker, in line with departmental assessment policy.

25. If sample moderation suggests that individual marks are anomalous, as opposed to being part of a wider pattern, those marks should be referred for discussion and/or further investigation by a third party. As the moderator has not reviewed all scripts, the initial mark(s) should not be altered. A change of mark should not be recommended until the anomaly has been confirmed by a full systematic check.

Appropriate Sample Size

26. In order that the department and the University should have confidence in the robustness of the moderation procedure without laying undue burdens on departments, the following acceptable minimum proportions of examination papers and assessed coursework should be reviewed:

a. for all examination scripts and summatively assessed coursework, a minimum of 10% of each piece of assessed work contributing to the final module mark should be second marked for moderation (subject to a minimum sample size of 10);

b. where there is more than one first marker for a piece of assessment (i.e. where the first marking of a run of examination scripts or coursework assessments is divided between two or more markers), 10% of the scripts/assessments first marked by each marker must be moderated;

c. for major projects and dissertations, the sample size must be 100% i.e. all major projects and dissertations should be moderated and/or double-marked (as a 100% sample is used, where work is double-marked the guidance issued in paragraph 32 should be followed for agreeing marks and resolving differences. An audit trail of this process must also be maintained as stated in paragraph 33).
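The minimum sample sizes in 26a-b above can be expressed as a simple calculation. The sketch below is purely illustrative (the function name and cap at the run size are assumptions, not part of the policy): 10% of each first marker's run, rounded up, subject to a minimum of 10 scripts.

```python
# Illustrative sketch of the paragraph 26 minimum-sample rule (hypothetical
# function name; the policy itself specifies only "10%, minimum 10").
import math

def moderation_sample_size(scripts_marked: int) -> int:
    """Minimum scripts to moderate from one first marker's run."""
    ten_percent = math.ceil(scripts_marked * 0.10)
    # 10% or 10 scripts, whichever is larger, but never more than the run itself
    return min(scripts_marked, max(ten_percent, 10))
```

For example, if a run of 180 scripts is split equally between two first markers, each marker's run of 90 scripts requires a sample of 10 (the minimum sample size, since 10% of 90 is only 9).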

27. Where sample moderation is used, it is not satisfactory for the selection of scripts for second marking to depend entirely on the first marker's identification of work which seems to them to be problematic. While problematic scripts can be reviewed, this should be as part of either an otherwise random sample, or a sample drawn in equal portions from the top, bottom, and middle of the marking range, of at least the size stipulated above.

28. A clear audit trail must be maintained where moderation has taken place. This should demonstrate the samples that have been considered, any systematic issues identified, and details of the actions taken to rectify these. One means of achieving this is by the use of a mark proforma or digital equivalent (see paragraphs 15-16).

Full Moderation (including double marking)

29. Full moderation requires the systematic review of all pieces of work for a given assessment. The purpose of full moderation is to provide quality assurance in regard to the marks assigned to individual scripts or coursework by providing combined academic judgement through the agreement of marks between first and second examiners and the resolution of problematic cases. In the case of work which is not marked anonymously it also provides some assurance against conscious or unconscious bias on the part of the first marker (who may have been a student’s tutor or supervisor).

30. Full moderation is used where there is value in reviewing every piece of submitted work for a given assessment. As such, it should be applied to all dissertations and major projects, given the more individual nature of these assessments. However, there is no requirement for full moderation of other types of assessed work (where sample moderation will normally be used).

31. There is no University requirement that full moderation must involve the double-marking of all assessments (where a separate mark is allocated by each marker), or be carried out blind or unseen (where the first marker's marks and the rationale for them are not communicated to the second marker until after they have completed their checking and/or marking), as this would not always be practical or beneficial. For example, for specialised modules or assessments where a department has a limited number of expert markers, a moderator would consider the consistency of marking, but could not comment on the details of the material, and must be guided by the first marker in regard to individual marks.

32. As all scripts are reviewed as part of a full moderation process, it is possible for the second marker/moderator to recommend that individual marks awarded by the first marker be reviewed or changed. There should be a clear procedure in place for the agreement of marks between markers and for the resolution of any differences. An example of such a procedure might be the following:

a. a discrepancy of <5% in the mark for the module as a whole and which does not span a classification border is to be resolved by taking the average of the two marks;

b. a discrepancy of >5% in the mark for the module as a whole or spanning a classification border is to be resolved by discussion between the markers to reach an agreed mark if possible;

c. if agreement cannot be achieved refer to a third party (which may include an external examiner).
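The example procedure in 32a-c can be sketched as follows. This is a minimal illustration only: the classification borders listed, the function names, and the use of `None` to signal "refer for discussion/third party" are all assumptions for the sketch, not part of the policy, and departments define their own procedures.

```python
# Illustrative sketch of the example reconciliation procedure in paragraph 32.
# The 5% threshold follows the text; the border values are assumed, not
# specified by the policy.
CLASSIFICATION_BORDERS = (40, 50, 60, 70)  # assumed boundary marks

def spans_border(mark_a: float, mark_b: float) -> bool:
    """True if the two marks fall either side of a classification border."""
    lo, hi = sorted((mark_a, mark_b))
    return any(lo < border <= hi for border in CLASSIFICATION_BORDERS)

def reconcile(mark_a: float, mark_b: float):
    """Agreed mark for a small same-class discrepancy; None means the
    markers must discuss (unresolved cases then go to a third party)."""
    if abs(mark_a - mark_b) < 5 and not spans_border(mark_a, mark_b):
        return (mark_a + mark_b) / 2  # rule (a): average the two marks
    return None  # rules (b)-(c): discussion, then referral if needed
```

For instance, marks of 62 and 64 average to 63 under rule (a), whereas 58 and 62 straddle an assumed border and so fall to discussion under rule (b).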

33. Wherever full moderation is used there should be a clear 'audit trail' showing the rationale for the mark reached by each marker, and the communication between them to reach an agreed mark. One means of achieving this is by the use of a mark proforma (see paragraphs 15-16). Raw marks, as well as reconciled marks, should be made available to external examiners.

Combining Sample Moderation and Double-Marking

34. A department may consider it appropriate to use both double-marking and sample moderation as mechanisms of quality assurance for the same assessment (for instance, double-marking all fails and borderlines to provide a combined academic judgement, and moderating a sample of all other scripts to monitor the quality and consistency of marking; or utilising sample moderation but permitting first markers to refer individual problematic scripts for second marking). In such cases it is important that the departmental assessment policy distinguishes between the purposes of each mechanism, clearly setting out where each is to be applied and the action required from the second examiner (e.g. that for the moderated sample marks must not be altered as not all students would have been treated the same way). This is particularly important where the same second examiner will operate in both roles (i.e. as second marker and as moderator).