Don MacLeod's recent article in the Education Guardian on the latest QAA report made interesting reading, not so much because it questioned the rigour of using external examiners but because of its comments about the lack of feedback to students: “The single intervention by universities and colleges that would improve the quality of the student experience would be the improvement of assessment practices … In a substantial number of cases, feedback on students' work could be more extensive and appropriate. Feedback was sometimes subject to prolonged delay, and annotated comments were often perfunctory. Such practice was seen as undermining the principal purpose of providing feedback to promote students' learning.”
I suspect there are many reasons why feedback goes astray, whether it is feedback from external examiners not being acted on or feedback to students not being given, but time and increased student numbers are likely to be contributing factors.
Graham Gibbs and Clare Simpson, in their article 'Does your assessment support your students' learning?' (Journal of Teaching and Learning in Higher Education, awaiting publication), argue that formative assessment, self-assessment, peer-assessment and coursework all allow for better feedback opportunities than assessments 'tagged' on the end of a course. They outline the conditions under which assessment can support learning, including the quantity, timing and quality of feedback.
By way of example, here's a mini case study from my own institution, one which I'm sure will be very familiar to those working at the 'coal face' in higher education.
A lecturer has sole responsibility for over 200 students, making feedback to individuals, or even to small seminar-sized groups, practically impossible. Instead of simply accepting this all-too-common scenario, we are developing and testing a prototype bespoke tool built on weblog principles. Our GroupLog helps the lecturer provide meaningful feedback to all students.
An activity is created and posted by the lecturer. Students work in groups to research the activity and formulate their response. Each group then posts its response privately back to the system (in much the same way as I'm writing this article). The lecturer reviews all the group responses and then posts a response of their own, highlighting points made by students and pointing out gaps where necessary. One benefit we have found from our weblog-influenced tool is that, as well as being able to view the lecturer's response, students can also view all the other groups' responses; they therefore have the opportunity for meaningful feedback and peer learning.
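For readers who like to see the moving parts, here is a rough sketch of that workflow as a simple data model. The class and method names are purely illustrative assumptions rather than GroupLog's actual code, but they capture the sequence: groups post privately, and everything opens up once the lecturer has responded.

```python
# Illustrative sketch of the GroupLog workflow; names are hypothetical,
# not the real implementation.
from dataclasses import dataclass, field


@dataclass
class Activity:
    """An activity posted by the lecturer, with group responses and feedback."""
    title: str
    brief: str
    group_responses: dict[str, str] = field(default_factory=dict)
    lecturer_feedback: str | None = None

    def post_group_response(self, group: str, response: str) -> None:
        # Each group posts privately; nothing is visible to other groups yet.
        self.group_responses[group] = response

    def post_lecturer_feedback(self, feedback: str) -> None:
        # The lecturer reviews all responses and posts one piece of feedback.
        self.lecturer_feedback = feedback

    def visible_to(self, group: str) -> dict:
        # Before feedback is posted, a group sees only its own response;
        # afterwards it sees every group's response plus the lecturer's feedback.
        if self.lecturer_feedback is None:
            return {"your_response": self.group_responses.get(group)}
        return {
            "all_responses": dict(self.group_responses),
            "lecturer_feedback": self.lecturer_feedback,
        }


# Example run: one activity, two groups, then the lecturer's round-up.
activity = Activity("Week 3", "Compare two models of formative feedback.")
activity.post_group_response("Group A", "We argue that ...")
activity.post_group_response("Group B", "Our reading suggests ...")
print(activity.visible_to("Group A"))   # only Group A's own post so far
activity.post_lecturer_feedback("Strong points from both groups; note the gap around ...")
print(activity.visible_to("Group A"))   # now all responses plus the feedback
```

The key design point, however it is implemented in practice, is that visibility is tied to the lecturer's response: peer learning happens only after everyone has committed their own thinking.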
GroupLog is still very much a work in progress but students have been positive about it so far and we hope to develop the concept and tool further.