Student Engagement

Our meeting on the 11th March 2016 addresses the topic of student engagement, something of a buzzword in recent years. The focus will be on whether, and to what extent, the concepts of ‘student engagement’ and ‘disengagement’ help us to understand the complex factors underpinning current concerns in teaching and learning in higher education. The literature seems to assume that ‘engagement’ in its various conceptions equates with positive outcomes, student satisfaction and development, while disengagement seems to be defined by poor performance, failure and poor student retention. This understanding of engagement/disengagement might reflect the way research has been too narrowly focused on engagement as a way of measuring quality teaching and learning, but in light of previous SoTL discussions, the terms seem to represent another reductionist dichotomy (deep/surface learning, entity/incremental intelligence theory) and force us to view students as either engaged or disengaged.

Leach and Zepke (2011) propose a number of perspectives for understanding engagement and test them with interview data drawn from nine different tertiary-level institutions in New Zealand. They take up a more widely focused position by adding to their initial framework aspects such as active citizenship and non-institutional challenges. Colin Bryson (2014) provides a UK perspective with his overview of the area, suggesting that engagement manifests itself in different spheres or levels: at the level of the task, the module, the programme, and in social settings outside the classroom. He suggests approaching engagement/disengagement in terms of ‘Engaging Students’ and ‘Students Engaging’, the former focusing on what teachers and institutions can do, while the latter focuses on students as individuals and highlights complex factors and dynamic processes. In the introductory chapter to his recent edited collection of essays on student engagement he outlines ten principles drawn from his review of the literature (2014, 18-19).

Some questions for the discussion:

  • How can we measure engagement, given the complex and dynamic set of factors identified in the literature?
  • In light of the ‘non-institutional’ challenges identified, do we need to question the importance given to ways in which students behave and how they spend their time outside the classroom, since the University has arguably ceased to be the centre of student life, with more and more students working to fund studies, and campus life becoming less relevant?
  • To what extent do you agree with Bryson (2014) that there is much overlap in the six perspectives that Leach and Zepke suggest and that their organizer is an unsatisfactory integration of complex factors?

Meeting Report

The articles chosen for this SoTL meeting provoked rich discussion, one that arguably put paid to the question raised in the briefing about whether the principle of “student engagement” risked collapsing into yet another, potentially reductive, dichotomy akin to those we have considered so far this year (e.g. deep/surface learning). If anything, our conversation circled around much more fundamental problems: defining the term decisively, picturing what an engaged student actually looks like, and questioning the causal relationship between student engagement and the ranked teaching practices that both staff and students would recognise as benefiting students’ experience and learning more generally. This was, we felt, a particular problem in Zepke et al. (2014), especially given the poor return rate on their surveys, where minor variances in the demographic of respondents could easily produce wide variances in the ranked order of these practices. We also considered the research methodology used, which opened up a wider discussion on the place, purpose and usefulness of student surveys in HE research into teaching and learning.

One of our concerns with the research was whether the methods used to measure student engagement assumed a rather fixed model of an “engaged learner,” one whose exact characteristics were obscure to us. Reasoning through our own experiences and assumptions as well as the research, we recognised that, at one extreme, there was a risk of an unrealistic bias characterising engaged students as prototypical academics wholly immersed in their discipline. At the other extreme, we acknowledged that the diversity of students’ motives for attending university might well mean a proportion of them were driven – or had been driven by parental pressure! – by a pragmatic decision to study subjects associated with a higher potential for future employment rather than by an intrinsic interest in the discipline. This driver need not, however, correlate with poor academic engagement. These students might well be, to use Entwistle and Ramsden’s term, “strategic learners” who are adept at doing as much as is needed to be successful (1982). A related category we noted comprised students whose focus is orientated more towards the extra- and supra-curricular opportunities afforded by going to university, which again might not necessarily compromise academic performance, and might even yield significant benefits in terms of future employment (e.g. through experience in print and broadcast journalism, volunteering, or other YUSU clubs and societies). In addition, we felt that the pattern and pace of student engagement was likely to fluctuate over the course of a programme, in part driven by the developmental transformations of students within the typical age range of undergraduates, which poses a problem for surveys that only capture a single slice in time.
Does this not also pose a tautological problem for the methodology, given the probability that students who participate in HE surveys regarding their university experience are also those most likely to be deeply engaged in that experience? By the same token, not all engaged students necessarily care about completing surveys!

Given the possible variances over time, it was evident to us that “student engagement” is a highly individualised and dynamic process. How can one then measure it, especially if one takes into account not just students’ academic study and the factors noted above, but also the entire personal and social nexus described in Bryson’s idea of “relational student engagement” (2014)? Is it possible to capture, quantify and map all the correlations of such a richly diverse array of human experience? Even if it were, is it realistic to suggest that we, either as HE institutions or as individual teachers, might be in any position to support students with the more personal qualities of this nexus, which Bryson clusters under students’ “Sense of Being” and which include their “confidence”, “happiness”, “imagination”, and “self-knowledge” (Bryson 2014, 10)?

The second concern raised with surveys focused on the motivations and drivers for their use. We noted that the NSS seemed largely oriented around a “consumer satisfaction” evaluation model and briefly asked whether the international student engagement surveys discussed in the reading for the session were successful in offering the richer alternative they seemed to promise. We were sceptical about the inclusion of a “value for money” question in the recent British interpretation of a national student engagement survey (UKES), with many in the group arguing that it is the commodification of higher education driving these surveys, rather than an interest in evaluating good learning and teaching per se. We noted, for example, that it was hardly a coincidence that the introduction of UKES coincided with the restructuring of student fees.

Alternatively, were surveys measuring “student engagement” as a proxy for “good learning”? In which case, why try to accommodate the relational nexus at all? Why not just focus on the aspects of students’ HE experience that we can measure and influence: our actions as teachers and students’ learning? If we were to orientate our practice around “student engagement,” what might this look like? One proposition was that we could set our own institutional definition of the term and explicitly tell students that this was an expectation of studying at York. Against this, others questioned the feasibility of measuring (assessing?) students’ compliance with such an overarching expectation, particularly if we took into account the full range of complex factors involved in “student engagement” as outlined in the research and as emerged from our discussion. Conversely, a radical reconfiguration of HE teaching and assessment was proposed, one with a “partnership model” of teacher/student relations at its heart, which would ensure students had the opportunity to collaborate fully on the entire design, delivery and assessment of their learning. While many approved of that idea in principle, others wondered whether partnership was really what students wanted, especially given the variances in student motivation to enter higher education study and the possible fluctuations in their level of engagement over their time at university.

These observations also reopened discussion from previous weeks on how one can accurately capture and quantify effective learning and teaching, and particularly our sense that “evidence” of learning “in process” problematically assumed vocal and active participation on the part of students, and consequently failed to accommodate more subtle learning processes such as active listening or the silent mental processing of, and critical reflection on, complex ideas. We were interested in Bryson’s claim that, ironically, surveys often failed to accurately capture students’ voices, as they inevitably funnelled students’ opinions into a narrow set of questions, whilst also failing to acknowledge institutional context (Bryson 2014).

This was not to say that we were entirely in disagreement with the research. We concurred, on the basis of our own experiences, that when students’ sense of identity, interpersonal relationships, and the “cultural capital” afforded by their social background facilitate ownership of their studies and a sense of belonging to a learning community, they are more likely to be academically “engaged.” We also acknowledged an obligation to cultivate a learning environment that might help foster these qualities in students, especially those whose experience and backgrounds might put them at a disadvantage in cultivating this “relational” nexus surrounding and supporting their academic study. Our scepticism was, rather, concerned with the feasibility of measuring, evaluating and (most importantly) influencing the range of factors that can contribute positively to students’ overall engagement with their university experience.


Works Cited

  • Bryson, C (2014) “Clarifying the Concept of Student Engagement” in Bryson, C (ed) Understanding and Developing Student Engagement. London and New York: Routledge.
  • Entwistle, N and Ramsden, P (1982) Understanding Student Learning. Beckenham: Croom Helm.
  • Zepke, N, Leach, L and Butcher, P (2014) “Student engagement: students’ and teachers’ perceptions.” Higher Education Research & Development 33:2, 386-398.

The texts are available in the “SoTL Network Session Papers” folder in Paperpile.



Note:  For our next meeting on the 19th April, we agreed to look at a different SoTL methodology to those we have already considered, with “Action Research” being proposed as a preferred option. If anyone has any particular interest, topic or specific publication they would like to recommend, do feel free to let us know either by emailing academic.practice@york.ac.uk or posting on our Google Community.

Previously suggested topics, not all of which are necessarily likely to yield an action research project, include: assessment (MCQs or something related to the York Pedagogy), Digital Literacy (Digital Natives/Immigrants), Learning/Curriculum Design (alignment), and Critical Pedagogy.

We have added folders in Paperpile for “Research Methods” and for “Critical Pedagogy.” We have also added a paper to the latter, which offers a broad introduction to the topic, as some in the Network expressed interest in finding out a little bit more about it.
