New Publication: Measuring Literary Quality - Proxies and Perspectives
A recent study from the Center for Humanities Computing (CHC) investigates how various indicators of literary quality interrelate, offering insight into how quantitative measures contribute to our understanding of 'quality' in literature.
In their paper, “Measuring Literary Quality: Proxies and Perspectives”, researchers Pascale Feldkamp, Yuri Bizzoni, Mads Rosendahl Thomsen, and Kristoffer L. Nielbo combine literary theory with quantitative analysis.
The team examines the interplay of 14 different measures of literary quality across 9,000 novels published in the United States from the late 19th century through the 20th, giving the analysis substantial temporal and numerical scope. They compare expert-based indicators of quality, such as how often a novel appears in syllabi and anthologies, with crowd-based metrics like Goodreads ratings, looking for patterns that shed light on a shared understanding of quality.
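To make the comparison concrete: one common way to relate such proxies is a simple correlation analysis across novels. Below is a minimal sketch in Python, assuming a hypothetical table with one row per novel and one column per proxy; the column names and values are illustrative, not the paper's actual data or method.

```python
import pandas as pd

# Hypothetical table: one row per novel, one column per quality proxy.
# Column names are illustrative; the study works with 14 such measures.
novels = pd.DataFrame({
    "syllabus_count":   [12, 0, 3, 0, 7],            # expert-based: appearances in syllabi
    "anthology_count":  [4, 0, 1, 0, 2],             # expert-based: appearances in anthologies
    "goodreads_rating": [3.6, 4.2, 3.9, 4.4, 3.7],   # crowd-based: average reader rating
    "library_holdings": [210, 890, 340, 1020, 150],  # availability in libraries
})

# Spearman correlations are a common choice for skewed count data:
# they capture monotonic association without assuming linearity.
corr = novels.corr(method="spearman")
print(corr.round(2))
```

In a setup like this, a positive correlation between two columns would suggest they track a similar notion of quality, while a negative one would point to diverging judgments.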
Pascale Feldkamp, research assistant at the Center for Humanities Computing, emphasized the complexity of defining literary quality, stating:
“When trying to define ‘quality’ in literature, we run into a problem where, on the one hand, reading is seen as so personal and intimate that no two readers might judge a book in quite the same way. Yet, on the other hand, some novels resonate through time, and, at a larger scale, readers seem to broadly align in their literary judgments. This suggests that there might be shared qualities or patterns behind what we think of as ‘quality’ in literature. We wanted to explore how different forms of quality in the literary field relate to each other.”
This dilemma prompted the team to explore whether our judgments of literary ‘quality’ stem from a common foundation or whether they reflect distinct forms of quality. As Feldkamp explains:
“Our question was whether different markers of ‘quality’ in the literary field all point to the same thing—overlapping indicators of one kind of ‘quality’—or whether they actually represent different forms of ‘quality’ altogether, and if so, how these forms might be characterized.”
Overall, the study reveals two primary perceptions of quality: one associated with canonical literature and the other with popular literature. For instance, works that score high on expert-based indicators (such as those frequently appearing in syllabi and anthologies) tend to receive lower ratings from general audiences on platforms like Goodreads, while award-nominated books are more commonly found in libraries.
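A two-group pattern like this can be illustrated by clustering the proxies themselves on their pairwise correlations. The sketch below, again using hypothetical data and a standard hierarchical-clustering recipe from scipy (not the paper's own procedure), cuts the tree into two clusters purely for illustration.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Reuse the hypothetical proxy table from the sketch above.
novels = pd.DataFrame({
    "syllabus_count":   [12, 0, 3, 0, 7],
    "anthology_count":  [4, 0, 1, 0, 2],
    "goodreads_rating": [3.6, 4.2, 3.9, 4.4, 3.7],
    "library_holdings": [210, 890, 340, 1020, 150],
})

corr = novels.corr(method="spearman")

# Proxies that correlate strongly (in absolute value) get small distances,
# so they end up in the same branch of the clustering tree.
dist = 1 - corr.abs()
tree = linkage(squareform(dist.values, checks=False), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

for proxy, cluster in zip(corr.columns, labels):
    print(f"cluster {cluster}: {proxy}")
```

With the toy numbers above, the expert-based counts fall into one cluster and the reader-facing measures into the other, mirroring the canonical-versus-popular split the study describes.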
Reflecting on the implications of their findings, Yuri Bizzoni, postdoc at the Center for Humanities Computing, notes:
“The differences among these markers of quality are intriguing—for instance, while classics are frequently translated, they tend to be less popular in libraries. However, what surprised me most were the similarities between these markers, specifically how different markers of quality seem to cluster around two main types, even though the forces behind them are so different. On one side, publishers like Penguin Classics help define what counts as a ‘classic’; on the other, ‘lay readers’ on Goodreads signal what they consider a classic – and the two often agree, to some extent. Goodreads users are a very diverse population that doesn’t necessarily read Penguin anthologies, or literary anthologies in general, and those same users might rate other books differently on another scale. Still, different types of reader-judges do seem to align when it comes to this ‘classics’ quality. It’s almost as if an individual reader can put on different lenses, judging a book from multiple perspectives.”
These insights led the team to conclude that markers of literary quality cluster around a few main forms. Consequently, it may be more accurate to speak of qualities, popularities, or successes in the plural, rather than of a single concept, when addressing a phenomenon as complex as literary appreciation. At the same time, the very fact that the markers cluster suggests that literary judgment is not entirely subjective.
For further details, see “Measuring Literary Quality: Proxies and Perspectives” in the Journal of Computational Literary Studies (JCLS).