Student Reflection, Our Reflection: Considering Peer Feedback and Action Research

‘Built into action research is the proviso that if as a teacher I am dissatisfied with what is already going on, I will have the confidence and resolution to attempt to change it. I will not be content with the status quo…’

(McNiff 1988)

At our next meeting, on the 19th April, we take a two-pronged approach informed by, and building upon, issues raised in previous sessions. We'll be exploring the concept of student peer assessment and feedback, as outlined in two papers in which action research has been the primary methodological approach. In addition to allowing some critical exploration of the issues inherent in implementing peer assessment, this will also provide an opportunity to discuss the merits and limitations of a method of higher education research that we have not yet encountered in our consideration of previous papers.

Action Research, as it is practised within the UK education context, is a process of direct enquiry into practice. The researcher is often an active participant in the activity being investigated (a teacher, a student, etc.), while the research itself is situated within a real, live context rather than within a constructed and controlled experimental environment. Action research will normally involve critical self-reflection and/or collaborative reflection, and is particularly characterised by the objective to understand, change and improve practice.

[Action Research graphic]

The two papers we will consider, one from a UK context and one from a New Zealand context, both seek to explore what challenges are likely to be faced in implementing a system of peer assessment and feedback and how these challenges might be addressed.

To help frame discussion in our session, we might begin by thinking about the following questions:

  • What do these studies have to say about the value of, and problems involved in, student peer-to-peer feedback, and what evidence base do they provide for adopting this assessment strategy ourselves? Do the interventions described in the articles ring true with your own experiences?
  • What do you make of the theoretical basis for these pieces? Is action research in fact research as we know it? How comfortable can we be in its findings? How does it work as a collaborative enterprise (Barnard et al.) versus the individual approach (McMahon)? And what can it do for us in HE?

Meeting Report 

In addition to allowing some critical exploration of the issues inherent in implementing peer assessment, the session provided an opportunity to discuss the merits and limitations of a method of higher education research that we had not yet encountered in our consideration of previous papers. This emphasis on methodology in our discussion was primarily due to our sense that, while the papers contained many points of interest, the research projects they described struck us as extremely limited in scope. It was consequently difficult to envision wider application of their results to our own practice.

Cartney's (2010) article made some astute observations on her students' concerns and anxieties about the peer assessment process, especially around fairness, accuracy, and parity of assessment between peers. We were interested in her account of needing to put reassuring safeguards in place, resulting in a higher rate of tutor intervention than she had anticipated: staff evaluating students' feedback to their peers, and extra tutor support for any students identified as being "at risk" of not passing. However, she notes in her conclusion that she was unable to evaluate the longer-term impact of her innovation, given that it was a "feedforward" process whose effect could only be measured over a longer timescale; she identified this as the question guiding her next Action Research cycle. This struck us as indicating a somewhat premature publication of partial results.

Barnard et al.'s (2015) paper was rich in allusion to relevant teaching and learning theory, and was framed by a fairly robust literature review of current work on peer assessment, which summarised how "good organisation of feedback should include teacher's cooperation, clarification of purpose, students' involvement in developing criteria, peer matching, training, specification of task and time needed, providing guidelines, examining the quality of feedback on peer assessment" (933-4). Yet it struck us that there was something of a disconnect between the article's high standard of engagement with existing scholarship and the self-acknowledged limitation of the project "beyond the immediate context" of the module it investigated (Barnard et al., 941).

These considerations prompted considerable wider discussion on the research practices and contexts informing these articles, dealing mainly with questions around the worth of Action Research as a methodological practice, and particularly as a basis for inquiries into issues in higher education. Where previous readings for our sessions had been representative of a naturalistic approach to research or had relied on larger-scale quantitative data and meta-data, here was a third methodological approach for dealing with the practice of (and practices in) HE. We noted that, on the whole, Action Research has had something of a lower status as a research practice: generally it has not been perceived as 'proper' educational research. However, the academic journals in which these particular pieces (and others like them) appeared told a slightly more complicated story. The journals are ranked noticeably highly by various indices of quality, so their worth could not simply be written off. That said, quality was felt to be a real issue in these pieces, and was, perhaps, an indicator of the still emergent nature of HE research. Such research would not stand up to the quality of the best subject-based research outputs, but then, perhaps this wasn't what the articles intended to do.

This led to some bigger questions about the nature of our work. Is the practice of teaching wholly experience-based, or is there a theoretical basis, a context that can be usefully applied? Much of Action Research seemed to be a direct write-up of practice, descriptions of 'what we do anyway', and it was sometimes hard to see the relevance of this beyond its individual contexts. The group considered ways in which Action Research interventions could be expanded in order to generate results that might be more profound or helpful to a broader audience. In particular, we would have been interested in seeing the article that was 'the next stage up'. Given the articles' acknowledgement that their conclusions were necessarily interim and imprecise, it was felt that it might have been more interesting to wait until more concrete conclusions could be drawn from a more extended, longitudinal set of results. If longitudinal data was regarded as one missing element, comparative data was another: without a sense of what was happening elsewhere in a programme, it was difficult to judge whether the modular interventions described had led to any real improvement. Here, too, the anecdotal nature of the student voice in these pieces was felt to be compromising.

Given that the stated point of the articles was to share good practice to everyone's benefit, the fact that, in the absence of clear results, said practice could not be put into effect presented something of a problem in the rationale for publishing these pieces. One colleague noted, for example, that a pedagogical journal in their discipline had recently implemented a major change in editorial policy, requiring all submissions to include a concrete recommendation for improving practice. The result of this change has been a significant reduction in the quantity of material the journal publishes, but an improvement in its usefulness. At the other extreme is work in HE that describes itself as Action Research but is driven by a greater emphasis on wider political and ideological critique and social intervention in learning and teaching. There are, for example, publications which proceed from the idea of Action Research as potentially politically incisive: a vehicle for change and reform, and a practice which might be utilised to improve the nature of work at university, to resist impositions from outside academia, and to build alternate spaces of learning with the wider community, operating outside or on the fringes of "The University" (Levin and Greenwood 2008; Somekh and Zeichner 2009).

Equally, consideration was given to other forms of practitioner research. For example, Exploratory Practice in language education might be regarded as being close to Action Research in nature, with a similarly intentioned integration of research, learning and teaching, and, arguably, similar issues in the robustness of that integration. But where Action Research might tend to gloss over the conditional nature of its practice, Exploratory Practice was felt to be more honest about its approach and thus capable of generating more useful results. Indeed, in the current climate of seemingly ever-changing contexts for higher education practice, some in the group felt that Exploratory Practice’s emphasis on the concomitant need for continual professional development was particularly useful.

There were, then, some difficulties encountered with Action Research. But we were also left with some timely questions: if the ideal of producing solid research into the practice of HE is sound, how exactly should we go about it? Given that we have discovered issues in various approaches, what exactly would make for rigorous scholarship in teaching and learning?

Works Cited 

  • Barnard, Roger, Rosemary de Luca and Jinrui Li. (2015) ‘First-year Undergraduate Students’ Perceptions of Lecturer and Peer Feedback: A New Zealand Action Research Project.’ Studies in Higher Education 40:5, pp. 933-944.
  • Cartney, Patricia. (2010) ‘Exploring the Use of Peer Assessment as a Vehicle for Closing the Gap Between Feedback Given and Feedback Used.’ Assessment and Evaluation in Higher Education 35:5, pp. 551-564.
  • Levin, Morten, and Davydd J. Greenwood. (2008) ‘The Future of Universities: Action Research and the Transformation of Higher Education.’ The SAGE Handbook of Action Research. London: SAGE Publications.
  • McNiff, Jean. (1988) Action Research: Principles and Practice. London: Routledge.
  • Somekh, Bridget, and Ken Zeichner. (2009) ‘Action Research for Educational Reform: Remodelling Action Research Theories and Practices in Local Contexts.’ Educational Action Research 17:1, pp. 5-21.

These are available via Paperpile.

Coming up next ….

Our next meeting will be on the 9th June 2016, when we will be looking at research associated with the long-standing innovative programme design at Alverno College.

In addition, on the 22nd June 2016, we are pleased to welcome Professor Jerry Wellington, who will lead a workshop offering participants close guidance and practical advice on developing their pedagogical research and scholarship projects. This workshop is now full, although feel free to email if you would like to be added to the reserve list.

The Autumn Term will also see the launch of the York Scholarship of Teaching and Learning journal, which will provide a platform for staff to disseminate the findings of their scholarship internally in an academic format.

Details will be available on our Google Community (UoY Staff only) and our SoTL webpage.
