Measuring Student Engagement: Proceeding with Caution
As APRIL moves into conducting formal research related to teaching and learning in the Faculty of Arts, considerations of what we are measuring, how, and why become important. Thus far, we have used student grades to look at the impact of a “flipped classroom” model on student learning. But grades alone don’t tell us everything we might like to know. For example, did students gain a deeper understanding of the course content? Did they gain a greater interest in, or at least respect for, the field they were studying? Did they enjoy and value their learning? Was it meaningful to them?
All of these questions can be considered part of a larger construct called “student engagement.” Engagement is a big deal in both public (K-12) and post-secondary education. The University of Alberta participates in the US-developed National Survey of Student Engagement (NSSE) – a survey administered to undergraduate students to measure the extent to which they are engaging in behaviours found to correlate with academic success. The U of A Colloquy blog offers a good layperson’s overview of the NSSE.
Seems like a good idea. Who doesn’t want engaged students? Yet the use of data to measure student engagement ought to come with some cautionary notes. Zepke (2014) provides an excellent, descriptive overview of key constructs and measurement tools for student engagement, but he does so in service of critical questions about “big picture” issues that may be overlooked when we focus on the technical task of measurement.
First, Zepke points to criticisms that large-scale surveys are too generic and behaviourally focused. Here’s a good example: one indicator of engagement from the NSSE is that a student asks questions in class. That’s an observable and measurable behaviour, right? But if you’ve taught even a couple of times, you quickly realize that students can be interested and engaged without breathing a word in class. Further – and this is a growing issue as more international students join our campus – potential cultural biases must be considered. Some cultures are characterized by greater reticence than others (Hofstede, 2011). Critics also point out that these tests “shape” learners toward the same end: “The meaning of success,” says Zepke, “tends to be connected to completion of programmes and paid jobs,” and thus overshadows other potential goals of education (2014, p. 701).
Zepke also expresses a concern that measuring engagement can feed into a larger fixation on pedagogy – the “how” of our teaching – at the expense of curricular questions. What are we teaching? Why are we teaching it? Perhaps most importantly, whose interests are served by what we teach? Who sets the agenda? It is fairly easy to sidestep these questions by invoking “academic freedom” in the context of our own classrooms, but where are our wider discourses? Zepke may be correct that pedagogy has colonized curriculum. One could argue, further, that isolated teaching and research practices, in the Humanities in particular (Creamer, 2003), weaken our capacity to voice why our disciplines matter beyond relevant “employability skills” (see O’Byrne & Bond, 2014).
Finally, Zepke examines the measurement of student engagement in the context of neo-liberalism, which is characterized by the commodification of knowledge and accompanied by an “audit culture.” Surveillance by “big data” creates wariness, mistrust, and conservatism – just about the last things we need in institutions dedicated to expanding minds.
Zepke is not alarmist here, and I do not mean to be either. NSSE and other similar surveys are tools, and like any tools, can be used well or used crudely, and to better or worse ends. Aggregate data from NSSE has been used in the past to pinpoint “trouble spots” for student retention, evaluate improvement initiatives in student services, work toward greater responsiveness to student diversity, and many other efforts to make universities and colleges better places for students to be.
The important thing is to use data wisely. For APRIL, this means using “hard” measures like academic performance and validated engagement instruments alongside qualitative methods that capture the subjective experiences of students and instructors. For many of us, the inclusion of qualitative data needs no justification, but as Creamer (2003) notes, different disciplinary areas are grounded in very different paradigms. It was interesting, for example, to read Ives, McAlpine and Gandell’s (2009) account of teaming with engineering faculty in a research effort. In this context, team members had to be persuaded that something other than a classic experimental design was needed to evaluate teaching initiatives.
This anecdote points to the need to ensure that we do not lose the relational dimensions of teaching and learning in our quest to isolate and measure the complex phenomenon of “student engagement.” While there are many good reasons to pursue and heed evidence of the classroom practices and conditions that help students learn, these are the means, not the ends, of our efforts. If the Humanities have a unique role to play in post-secondary pedagogy, perhaps it is in part that of keeping the humanity in our teaching.
Creamer, E. G. (2003). Exploring the link between inquiry paradigm and the process of collaboration. The Review of Higher Education, 26(4), 447–465. doi:10.1353/rhe.2003.0012
Hofstede, G. (2011). Dimensionalizing cultures: The Hofstede model in context. Online Readings in Psychology and Culture, 2(1). http://dx.doi.org/10.9707/2307-0919.1014
Ives, C., McAlpine, L., & Gandell, T. (2009). A systematic approach to evaluating teaching and learning initiatives in post-secondary education. Canadian Journal of Higher Education, 39(2), 45–76.
O’Byrne, D., & Bond, C. (2014). Back to the future: the idea of a university revisited. Journal of Higher Education Policy and Management, 36(6), 571–584. doi:10.1080/1360080X.2014.957888
Zepke, N. (2014). Student engagement research in higher education: questioning an academic orthodoxy. Teaching in Higher Education, 19(6), 697–708. doi:10.1080/13562517.2014.901956