The Way We Judge Humanities Professors Is Broken—This Initiative Could Fix It
Professors of the humanities and social sciences are now evaluated according to criteria generally developed for the hard sciences or according to metrics which don’t fully capture the impact of scholarship—the number of citations a piece of writing gets, for example. And beyond this, there is the dictum which haunts even the most ivy-clad ivory tower: publish or perish.
“There’s no connection between what we measure and what we value,” said Christopher P. Long, a professor of philosophy at Michigan State University. “The key from my perspective as a dean and a scholar is that we need a more textured way to tell the story of the impact of our work.”
Long and a team of six other academics and researchers are developing a new set of metrics for professors across the humanities and social sciences. Though still in its infancy, the initiative, titled the “Humane Metrics for the Humanities and Social Sciences,” or HuMetricsHSS, received a $309,000 grant from the Andrew W. Mellon Foundation last month.
Currently, the metrics that determine an academic’s institutional standing—and yes, a professor’s standing on the tenure track—are blunt. Long points to the reliance on citations as one example of what’s broken. Since an article with more citations is deemed “better,” an academic might produce a piece of work that’s extremely provocative simply to attract attention. That doesn’t mean the scholarship is valuable—it just might mean that the work made people upset. “That is a rudimentary way for measuring the impact of scholarship,” said Long.
HuMetricsHSS wants to develop a metrics framework grounded in values. Currently, five are listed on the initiative’s website: equity, openness, collegiality, quality, and community. These are broad starting points, subject to change and addition as the initiative develops.
Still, one can already imagine how they might apply to the “publish or perish” dictum, which, as Long notes, really means publish in a select number of academic journals treated as being influential. “Part of what we want to do as an initiative is open up the range of recognized opportunities through which ideas can inform and transform the public,” said Long.
Using the Mellon grant, the HuMetricsHSS team will develop two pilots over the next 18 months, one focused on annotations and the other on syllabi. In a blog post on the latter, HuMetricsHSS team member Nicky Agate cited Rebecca Kennison, also working on the project, as saying, “we want to argue that there’s much more to scholarship than publishing regularly in high-impact journals.” Developing metrics around syllabi can help boost practices that everyone in the arts generally claims to value. Syllabi, for instance, can be evaluated for questions of equity and openness—criteria which could benefit those traditionally marginalized from the humanities canon.
Long explains that these new metrics would also allow a syllabus itself to be analyzed—noting which other scholars are included or referenced on it, for instance. This kind of information is currently opaque, Long said, but it contributes to a story scholars can tell about their work—like knowing how often they’re taught in schools—that is richer and more textured than current metrics allow. A professor’s article, for example, might be cited infrequently yet still find its way onto syllabi across the country.
The initiative faces numerous challenges. The current metrics are deeply ingrained in the fabric of academia and, simultaneously, reviled to the point that the humanities have an “allergy” to talking about metrics reform, Long said. Getting over the skepticism and having support from other academics is a key goal of the HuMetrics initiative, along with developing a framework for different ways of evaluating the humanities and social sciences.
Broadly, applying metrics to the arts is a thorny topic. Some see the field as valuable exactly because it eschews measurable utility in favor of a more transcendental personal experience. In a larger sense, metrics can also be divisive: While museums don’t typically measure whether a visitor has been moved by an exhibition, they do keep total visitor counts—which, some argue, are wrongly seen as synonymous with success. And highly controversial quality metrics currently under development by Arts Council England, meant to evaluate grantees, have many in the sector worried “funding decisions would be based on algorithms and boxes ticked,” writes Charlotte Higgins in The Guardian.
HuMetricsHSS will, its creators hope, provide a different, more nuanced model that can be applied specifically to scholarship in the humanities writ large. For metrics to be meaningful and valuable in academia, Long argues, the people being measured need a say. “There hasn’t been an opportunity for scholars themselves to influence the structures of metrics,” he said.
Isaac Kaplan is an Associate Editor at Artsy.