
Martin Paul Eve

Professor of Literature, Technology and Publishing at Birkbeck, University of London


This post is part of an ongoing series in which I am developing my full personal (not institutional) response to the HE Green Paper. Comments that help refine it are welcome.

The Green Paper asks in Question 10:

Do you agree with the focus on teaching quality, learning environment, student outcomes and learning gain? Please give reasons for your answer.

Beyond the obvious lack of detail on how any of these components can be meaningfully measured, there are substantial problems with the proposed areas of focus for the TEF.

On measurement, the Green Paper notes that “Because there is no single direct measure of teaching excellence, we will need to rely on proxy information”. Yet, at the same time, it concedes that “there have only been imperfect proxy measures”. Even before the technical consultation has been conducted, the Green Paper is overly confident that it can succeed where well-funded private companies with vast datasets have failed.

On “teaching quality”: the proposed areas of measurement, such as “Students are intellectually stimulated, actively engaged in their learning, and satisfied with the quality of teaching and learning”, are similar to those specified in the National Student Survey. Measurement of this kind usually relies on students’ assessments of their institution. Yet this kind of exercise is deeply flawed. Institutions game the survey responses through incentivization and through arguments such as “if you rate us badly, your degree will be worth less as an employability criterion because the institution’s reputation will be lower in the future”. Conversely, many students do not want to see future cohorts subjected to fee increases and so may rate institutions lower precisely so that they cannot raise fees. Furthermore, most students have no experience of any other institution, so their assessments are not comparable across the sector. The results are, rather, just a sense of how a student feels he or she has been treated, which is more akin to a measure of “happiness” than of whether the teaching was good.

Furthermore, and extremely importantly, these arguments have recently been validated by two independent studies. These studies – Carrell, Scott E., and James E. West, ‘Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors’, Journal of Political Economy, 118 (2010), 409–32, and Braga, Michela, Marco Paccagnella, and Michele Pellizzari, ‘Evaluating Students’ Evaluations of Professors’, Economics of Education Review, 41 (2014), 71–88 – showed that students evaluate their teachers based on how well they did on the course, rather than on how well they were taught. In other words, students often believe that if they receive a bad grade, it is the fault of the teacher, rather than a consequence of their not having learned well. Appraising teaching quality through such assessments will lead to grade inflation, since teachers will be incentivized to ensure that their institution continues to receive funding. A further study – Bjork, Robert A., John Dunlosky, and Nate Kornell, ‘Self-Regulated Learning: Beliefs, Techniques, and Illusions’, Annual Review of Psychology, 64 (2013), 417–44 – showed that the easier students find a related task (such as listening to a lecture), the more they think they have learned, when it may actually be that harder tasks help students to learn better. A good summary of these studies can be found in Poropat, Arthur, ‘Students Don’t Know What’s Best for Their Own Learning’, The Conversation [accessed 30 November 2015].

Measuring these types of elements, then, is hard, if not impossible. One can, of course, put appraisers in a room with teachers, but this tends to affect the teaching itself, thereby defeating the point of the exercise. I am also not clear on how individual staff (those who actually do the teaching and can make a difference) will feel about a TEF score and their contribution towards it. Staff who go the extra mile but then see their institutions punished are unlikely to maintain high standards, as they will be demotivated.

On “Learning environment”: many of these proposed criteria are attacking straw figures. For instance, “The provider recognises and rewards excellent teaching through parity of status between teaching and research careers, and explicit career path and other rewards”. This already happens. See, for example, the promotion criteria at the University of London that permit Professorial status on the basis of “excellent teaching” alongside acceptable performance in research. That said, I was very pleased to see a focus on “The relationship and mutual benefits between teaching, scholarship and research”. If this is lost, then HE is in trouble. It is vital that research and teaching not be tracked into different areas at institutions.

On “Student outcomes and learning gain”: I am unclear what the proposed items of measurement include here. For example: “students get added value from their studies”. What does this mean beyond the specific areas identified? There is also insufficient focus here on the social benefits of an educated population, in favour of short-term thinking on skills: “Students’ knowledge, skills and career readiness are enhanced by their education”. While there is a passing mention of “educational […] goals”, it is not clear that these should always and in every case be linked to and measured by employment. It is clear that we need people to know about history and literature for the good of society. Yet this seems to be elided here in favour of crass metrics of whether degrees provide training in the present. There should be more acknowledgement that universities are not simply “skilling” engines for business, even though universities have long recognised that they have a duty to help students prepare for future careers. It is also worth noting that many mature and/or part-time students do not (re-)enter HE for any kind of employment gain, but rather because they are interested. The motivations of students from diverse backgrounds must be factored into any measurement of outcomes, since without knowing them one is making unfounded assumptions about what students want.