How would you define what an essay is? Do disciplinary essays share sufficient characteristics to warrant the common name for this type of assessed writing? How would you define the writing style and standard expected of students producing a report for laboratory work or projects in the Physical Sciences or Mathematics? What are we asking of students when we require them to write for a “non-academic” audience? How would you describe “good” writing in your discipline?
There have been repeated alarmist cries of a decline in writing standards across all educational levels in recent decades. At the same time, there has been a rise in writing support services in universities, which until recently operated on a remedial model aimed at struggling students. Operators of these services frequently note, however, that it is not the struggling students but the ambitious and able ones who are most likely to come to them for support. Recent research on student writing in Higher Education has consequently begun to acknowledge a more complex landscape to the challenges all students face with their writing, proposing that the issue is not necessarily a decline in standards so much as students being confronted with a “hidden curriculum” of implicit writing norms, styles and standards that are never clearly articulated. Furthermore, rather than assuming some objective, universal standard of good writing, this research argues that we need to recognise that disciplinary writing and thinking are wholly intertwined: we are not just introducing our students to disciplinary knowledge, we are also inculcating them into disciplinary languages and modes of expression. The research also challenges the common belief, particularly in textbooks, that the quality and standard of students’ writing matter only to Arts and Humanities disciplines that commonly assess through essays. Indeed, as the chapter for this week notes, Engineering students can be required to produce up to fifteen different “genres” of writing across their degree programme.
Nesi and Gardner’s research on student writing “genres” has emerged from their work with the British Academic Written English Corpus (BAWE), a repository of almost 3,000 written assignments from across the disciplines and from four levels of study (first-year undergraduate to Masters). These assignments were collected from a number of UK universities between 2004 and 2007. All of them are genuine assessments that had been awarded a grade of 60 or above (2:1/1st or Merit/Distinction). Where Nesi and Gardner’s work differs from other research on student writing is in their analysis of actual student work in order to define and interrogate terms commonly used in the UK for genre forms (essay, case study, research report, lab report, etc.). Previous studies of student writing had depended on surveys and data analyses of forms of assessment, and thus never questioned the use of, and assumptions underpinning, these genre terms.
- Prior to reading this chapter, did you have a clear sense in your mind of the norms and conventions for student writing in your discipline? Do you feel there is a consensus on what these are amongst your colleagues?
- Do you find any of Nesi and Gardner’s genre categories surprising, particularly in terms of how they cross disciplinary boundaries (e.g. “Empathy Writing” emerging as a genre category in Mathematics, or “challenge” essays being a rare occurrence in most Arts and Humanities disciplines)?
- Which of Nesi and Gardner’s writing genres are required in assessments on any of the modules or programmes you teach? Are some genres more common than others in your discipline? Have any of them been recently introduced, and if so, to what purpose?
- On the basis of this chapter, why might students struggle with understanding what writing standard is expected of them?
- What impact might Nesi and Gardner’s potentially more accurate set of definitions of student writing genres have on the design of assessment, the curriculum overall and/or support for student writing (including embedding writing skills into the curriculum)?
Our session began with questions of value: opening thoughts on the chapter acknowledged its interest, while also asking what we could usefully draw from it. The discussion thus tended towards considering the practical implications of this work for us in supporting students: what sorts of interventions can we make with the genres mapped out by the piece, its conclusions, or indeed the corpus from which these results are drawn?
There was a general proposition that thinking about assessment genre in this way can be very useful when working with students. This might be, for example, when thinking through a particular assessment which may seem to us familiar and straightforward in its structures and intellectual demands, but which in fact requires some explanation and discussion. Sometimes this is a result of the genre’s very familiarity: students may think they know what is expected in a certain assessment, but in fact miss a fundamental property of the genre. Or it might be when presenting students with less familiar assessments – one group member described the blank faces which met an explanation of a literature review. It was felt that both the work on the corpus and the corpus itself could be of considerable help in teasing out the purposes, requirements and expectations of such assessments. On the other hand, some group members did not find the piece so immediately useful, sensing that it told us things we already know. One argument here was that the complexity of the work obscures its usefulness, and that developing and deploying genre explanations would be better served by a common-sense approach than by the exposition of a large and at times slightly fuzzy framework.
From a more discipline-specific point of view, there were also group members whose research areas already involved the BAWE corpus. It is worth acknowledging that, unlike some previous pieces discussed in our group meetings, this one was firmly rooted in a specific discipline, linguistics, and its particular forms: it was written for that audience, authored by writers working within the EAP (English for Academic Purposes) tradition. There was detail to be uncovered here, and usefulness in terms of methods for preparing students (for example on pre-sessional programmes) for some of the expectations of writing in various genres. The piece thus offers insight not just into awareness of genres, but also into how to use the language of those genres. There was also the possible use of the corpus as a research tool to show to students, or from which to draw exemplars across multiple subjects – something that had not existed in such breadth previously. Indeed, the whole approach is grounded in student work, where previous move analysis had been conducted largely on research articles; in that sense it breaks from previous research both in its empirical base and in the sorts of conclusions that can be drawn.
One noted eye-opener in the chapter was that it drew attention to the variety of assessment genres experienced by students in some disciplinary areas – the many different things they have to learn to do well – versus the relative lack of variety experienced by students in other areas. For example, students of Engineering would likely experience seventeen different assessment formats during their time with us. There were of course reasons for the presence of this range in the discipline, some internal to the nature of the subject and some stemming from external pressures to produce students with capability in particular areas, e.g. writing for public impact and for non-technical audiences. But talking to students in real depth about how they should approach assessment is, it was felt, somewhat less common in the Sciences than in other subjects – which could, of course, compound the issue. On the Arts side, a natural reply from a discipline more heavily weighted towards one assessment type (for example the essay in English, Philosophy or Politics) would be that great variety can occur within a single assessment genre (a point picked up by the chapter), and indeed that there is something to be gained through constant practice.
Indeed, practice was a word repeated several times in our meeting – and if practice might not make perfect, it was felt that practising the expected writing of a particular genre could at least improve students’ work. One group member provided a useful musical metaphor here: one would not expect a student to be able to perform a complex piece of music without years of effort on an instrument. Similarly, one ought not to expect a student to be able to produce a complex piece of writing without significant practice in the instrumentation of that writing. Some felt that, as academics, we often tell students that the best way to learn is by reading others, without saying a great deal else about what students can do themselves with their own work. It was felt to be quite common even at master’s level to hear lecturers repeating mantras along the lines of “read these articles and you’ll learn how to write.” Of course, such advice can lead to emulation and mimicry rather than to students working for themselves. Embedding opportunities to practise without undue pressure, and to internalise rich feedback on form and content, seemed to be regarded as the way forward, even as it was acknowledged that this was more difficult and time-consuming work.
As something of a comparison point, we had the advantage in our discussions of those group members who had experienced higher education in the US, where composition and rhetoric courses are a more usual part of most curricula and receive a fair amount of investment. Even there, though, there was a sense that the work was not properly and successfully embedded into the full curriculum: writing courses generally occur early in the degree and tend most often to be run by graduate students supervised by junior faculty. Sometimes the quality of this teaching is excellent, sometimes rather less so. This feeds a range of student abilities into the later years of degree courses, much like the UK picture, with similar attendant issues of how best to support such a range of capabilities. Indeed, it was noted that in some cases a writing course can have a negative impact: composition led by those from a background in creative writing or journalism can teach certain genre values (e.g. an impassioned register or persuasive tone) which may well be entirely inappropriate to, say, a scientifically reasoned and dispassionately deliberative lab report. Some colleagues present had found students’ research report writing to be particularly poor, and the BAWE work gave helpful insight into some of the potential reasons for this; in this sense the research was regarded as very useful indeed.
Subject and Subjectivity
What is it that we want to see in student writing? As participants in our session averred, we can think it straightforward to say (for example) ‘write me an essay’ to a student, but in fact this is not so straightforward a thing at all. Similarly, in setting questions we can effectively say ‘write me an essay on x’ thinking x is the complex and time-consuming part, when a student may find the rest of the clause to be the difficult bit. Worse, we can think that x is the part we are consciously and consistently marking, when this may not always be exclusively the case.
More complicatedly, there is the issue that we ourselves do not always agree on what we’re looking for in a given genre. To take one example from our session, we might like to read an essay with plenty of signposting: the times when an argument points out what it will do, when and why. If we do like this, it might simply be because we find such stylistic features helpful to the reader (particularly if said reader has a hundred very similar arguments to read in a short time frame, and occasionally sleeps). We might go further, finding such elements to be scholarly good practice, an index of authorial control and direction. Or, at the other end of the scale, we might find these moments to be an interruption, awkward affectation, or otherwise a turn-off. We might just not notice it, or not really care. Staff within the same subject area, in the same corridor, can expect different things from student assignments. I think we all acknowledged this basic truth, but is it a problem?
From some corners, the answer was a resounding “not really.” All teachers are subjective – the effects of assessment criteria, blind double marking and external examiners only go so far. Honesty about that with students can help, and encouraging students to ask their markers what is expected of them can be excellent practice for graduate life. Nobody, after all, successfully submits a real piece of writing without checking what their audience is looking for. However, problems were also raised with this, not least the fact that a given lecturer is not always the one doing the marking. Others pointed out that giving contradictory information around assessment can be problematic: dwelling on the differences between markers can make it harder for students to see the similarities. Meanwhile, working with groups of staff to arrive at a common understanding of what is expected can be very useful, but difficult.
The problem, it was felt, is not so much that students are, at some point or other, taught the “wrong” thing or given the wrong advice, but that they reuse it. Students (indeed, all of us) tend to transfer structural knowledge from one assessment to another – with mixed results. This can be a real issue if it goes unnoticed or unaddressed. However, if teaching one “right” way can be problematic down the line, we need to think carefully about precisely how we teach writing skills overall. Embedding writing practice into teaching can pack out the curriculum, particularly where curricula are already rather weighed down (e.g. master’s programmes). What content do you lose to tackle academic writing? Or is it, rather, part and parcel of teaching such content to help students develop and reproduce good “habits” of writing?
Of course, developing good work can also mean understanding poor work. Here was one opportunity, it was felt, for the corpus to be usefully expanded. The BAWE corpus is composed of good (upper second and first class) work rather than poorer work; while this aims to show what a successful student writes like, it is difficult to understand what makes that writing successful without the ability to compare less successful or unsuccessful writing, or to access the thoughts of those marking the work in the first place. Of course, it would likely be much harder to convince students to offer their worst work for public view – and this thought drew us back to previous conversations about the difficulty of encouraging students to work productively with failure and to feel “allowed” to really encounter difficulty and learn from mistakes.
As a final thought, then, we noted that any corpus of writing will necessarily have to change over time. Genres aren’t fixed; they’re flexible and can be challenged and changed. Perhaps we need to give the same ability to challenge and change to our students – to give them time to fail, understand and improve, rather than assessment remaining a do-or-die proposition.
Nesi, H. and Gardner, S. (2012) “Families of genres and assessed writing”, in Genres Across the Disciplines: Student Writing in Higher Education. Cambridge: Cambridge University Press.
The chapter is available in Paperpile.