What happens when we systematically measure the quality of knowledge in academic settings? For thousands of scientists across the United Kingdom, the answer is disheartening and close to home: knowledge that is measured and quantified becomes more similar and homogeneous, eroding the organizational and epistemic diversity of their fields over time.
This is what I find in my recently published book The Quantified Scholar: How Research Evaluations Transformed the British Social Sciences. Focusing on the evolution of four disciplines (anthropology, economics, political science, and sociology), I find that a specific way of measuring the quality of knowledge—periodic country-wide assessments of Britain’s public universities—has resulted in more homogeneous disciplines, in both their organization and their content.
A very British case
To approach the question of knowledge’s reactivity to measurement, The Quantified Scholar looks at a paradigmatic case. Since the mid-1980s, the UK government has sought to assess the quality of the research conducted in its public universities to determine how to allocate funding across the more than 100 institutions that make up British higher education.
Predicated on arguments about thoughtful and efficient spending, governments from Margaret Thatcher’s to Boris Johnson’s have asked academics across all fields of knowledge to put forward their best work for evaluation. Roughly every five years, and through the careful curation of their employing institutions, academics submit their work to panels of expert peers, who read and score their articles, chapters, books, artworks, exhibits, and other intellectual products (known as “outputs”). In this process, each output receives a grade from 0 to 4, with the highest score representing research of leading international stature.
Grades are doubly meaningful: in addition to signaling a particular type of scholarly contribution, they are used by national governments in Britain to distribute funding. Departments receive so-called quality-related funding only for their highest-scored outputs, creating clear incentives to submit the best works of their academics to the evaluation.
Known under different names—Research Assessment Exercise initially, Research Excellence Framework more recently—these evaluations amount to collective exercises in measuring the quality of scientific knowledge in Britain. Backed by peer-review panels and complicated mechanisms for guaranteeing the robustness of the scores, the evaluations serve as a ruler, of sorts, through which knowledge is measured and rewarded. What do these rulers do?
How I studied academics
The Quantified Scholar combines computational methods with oral histories to make sense of how this family of evaluations transformed the British social sciences. The specific combination, which I call the extended computational case method, iterates between the findings of computationally informed analyses and conversations with British academics who have direct experience of these intense evaluations. Through this process of iteration, findings from one set of methods (such as statistical models of labor mobility or computational analyses of thematic diversity across institutions) structure and inform conversations with informants, which in turn elicit new analytical decisions and further rounds of analysis.
Using this iterative strategy, The Quantified Scholar identifies both the epistemic patterns and the organizational mechanisms associated with the implementation of the evaluations.
For example, by constructing a dataset that contains the institutional affiliations of more than 14,000 academics active in the UK from 1980 to 2018, computational methods made it possible to observe how scholars’ movements across institutions were associated with specific predictors, such as gender and productivity, as well as with long-term changes in their fields of practice. Science is, at the end of the day, a form of labor tied to our bodies. As we travel across institutional spaces, we minutely change the distribution of knowledge. Studying mobility thus gives us a clue to how the evaluations might have affected social scientific knowledge in Britain.
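To make the logic concrete, here is a minimal sketch of a discrete-time analysis of this kind in Python. It is not the book’s actual pipeline: the file name, column names, and predictors are hypothetical stand-ins.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical scholar-year panel: one row per academic per year, with
#   moved        - 1 if the scholar changed institutions that year
#   female       - gender indicator
#   productivity - outputs published in the preceding period
#   post_eval    - 1 for years following an assessment round
panel = pd.read_csv("scholar_years.csv")

# Discrete-time event-history model: a logit of moving in a given year
# on individual-level predictors.
model = smf.logit("moved ~ female + productivity + post_eval", data=panel).fit()
print(model.summary())
```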
Academic mobility shapes the organization of science
This data is complemented with information on the “textual” structure of the British social sciences over time. A computational representation of the outputs of British academics, based on so-called topic models, made it possible to show that patterns of mobility were distinctly associated with “epistemic” variables (that is, with what scholars wrote). Combined with the mobility data, these variables show that, throughout the evaluations, scholars who wrote things similar to those of their immediate colleagues were more likely to change jobs than those whose work was more distinct. Likewise, the data show that scholars located at institutions whose scholarship stood far from the discipline’s canon were also more likely to leave their posts than their peers.
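As a rough illustration of how such an epistemic variable might be built (a sketch under assumed inputs, not the book’s implementation), one can represent each scholar as a mixture over topics and measure how close that mixture sits to those of their departmental colleagues:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical input: one row per scholar, pooling the text of their
# submitted outputs, plus a department label.
df = pd.read_csv("outputs.csv")  # hypothetical columns: scholar, department, text

# Represent each scholar as a distribution over latent topics.
counts = CountVectorizer(stop_words="english", min_df=5).fit_transform(df["text"])
topics = LatentDirichletAllocation(n_components=50, random_state=0).fit_transform(counts)

def similarity_to_colleagues(i: int) -> float:
    """Cosine similarity between scholar i's topic mixture and the mean
    mixture of their departmental colleagues (excluding scholar i)."""
    same_dept = (df["department"] == df["department"].iloc[i]).to_numpy()
    same_dept[i] = False
    colleagues = topics[same_dept].mean(axis=0, keepdims=True)
    return float(cosine_similarity(topics[i : i + 1], colleagues)[0, 0])
```

A low value of this measure would flag a scholar whose work stands apart from their local colleagues; in the book’s findings, it is the similar, not the distinct, who were more likely to move.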
This pattern of structured movements across the employment space (of epistemic sorting, as I refer to it) had distinct effects. Over time, it moved scholars in ways that made the social sciences more organizationally homogeneous. Departments and units began to look more like each other, converging on an assumed canon of what each discipline should look like. This had clear consequences: specialist institutions became rarer, universities converged on roughly similar ways of organizing their communities, and working outside of the field’s epistemic core became less feasible.
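One simple way to operationalize this kind of convergence (again a sketch, reusing the hypothetical topic mixtures from above rather than the book’s measures) is to track the average pairwise similarity between department-level topic profiles over time:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def field_homogeneity(topics: np.ndarray, departments: np.ndarray) -> float:
    """Mean pairwise cosine similarity between department-level topic
    profiles; higher values mean departments look more alike."""
    labels = sorted(set(departments))
    profiles = np.vstack([topics[departments == d].mean(axis=0) for d in labels])
    sims = cosine_similarity(profiles)
    upper = np.triu_indices(len(labels), k=1)  # count each department pair once
    return float(sims[upper].mean())
```

Computed separately for each assessment period, a rising value of this measure would indicate departments converging on a shared disciplinary canon.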
These results address patterns of mobility and organizational change, but not their underlying mechanisms. How did this sorting work? And how was it experienced by British scholars? Like other forms of knowledge, science is shaped by the settings it inhabits. As a collective product of labor, science not only reflects the constraints of instruments, measurement, and uncertainty, but also the conditions and changes of our lives. Oral histories made it possible to examine these conditions and changes, and to make sense of why scholars moved across institutions in the ways described above.
Local conditions matter
Behind these patterns of mobility, scholars highlighted various ways in which the evaluations nudge them to produce work that their peers deem more consistent with their discipline. Within their organizations, their value becomes tied to notions of being adequate contributors to the evaluation, with internal “mock” exercises (that is, simulations of the actual evaluation) serving as ways of approximating their potential value. In some settings, these simulations are explicitly part of scholars’ merit reviews, with poor outcomes translating into delayed promotions or, more drastically, changes in their contracts.
More broadly, however, the evaluations reinforce specific hierarchies of value in knowledge. In aiming for work that is “world-leading in originality, significance and rigor”, the evaluations privilege certain kinds of research, publication venues, and formats. Given the incentives institutions face to obtain the best possible outcomes, this creates an environment that pressures academics in specific ways.
While The Quantified Scholar is certainly a study of how knowledge changes in response to its measurement, it also offers a critical reflection on what we can do as academics to avoid the negative impacts of evaluations on our lives. At their core, the evaluations that I studied are a collective problem, founded on shared hierarchies of value that we, as academics who build and reproduce knowledge in our research and teaching, actively police and maintain. Rather than placing blame on measurement and quantification, The Quantified Scholar asks us to reflect on the everyday practices of evaluation that we deploy in our academic lives. These are, after all, under our partial control, and it is by rethinking and redirecting them that we might be able to build a profession based on solidarity instead of one organized around individualized notions of merit and excellence.
Read more
Juan Pablo Pardo-Guerra. 2022. The Quantified Scholar: How Research Evaluations Transformed the British Social Sciences. Columbia University Press.