Talking about Viral Texts Failures
A few weeks ago, Quinn Dombrowski posted the following thread about failure—or rather about the perpetual dialogue around failure—in the digital humanities community. The whole thread is worth reading in full, but I embed two salient moments here:
It's been 7 years since I gave the Project Bamboo talk at DH2013, that resulted in the paper that everyone cites. I was pregnant at the time. That kid is now my kindergarten-age junior coworker. Lots of people cite that paper and talk about how we should talk about failure.
— Quinn Dombrowski (@quinnanya) May 28, 2020
So that's my provocation, particularly to senior scholars in the field. Let's stop talking about talking about failure. Start sharing your experiences personally. Or let's at least talk about what you're afraid of. What's stopping you? How can we change that? #DARIAHVX /end
— Quinn Dombrowski (@quinnanya) May 28, 2020
I’ve been fortunate to collaborate with Quinn a few times, and we have long contemplated writing something together about the failure of DHCommons, though that piece, too, has never materialized.
I agree that, while DH is awash in rhetoric about the need to embrace experimentation, iteration, and failure, for the most part we are no more likely than scholars in other fields to discuss our failures explicitly. The reasons for avoiding specifics are easy enough to understand: however much we claim to value experimentation and risk-taking, academia does not reward experiments that do not pan out. Humanities journals are not replete with articles describing interpretations that don’t quite work or archival visits that simply didn’t yield interesting new insights. It’s hard to win a new grant on the strength of a previous grant that failed to produce its expected deliverables.
I have in the past commented that the “excessive claims” that humanists sometimes mark as a negative trait in DH work are, to my mind, among the most humanistic aspects of DH. Humanists are trained to claim novelty and originality as the basis for our arguments (“previous scholarship has entirely overlooked the pervasive influence of…”), and those tendencies creep into the rhetoric of project descriptions and grant proposals. Which is all to say, I found Quinn’s argument that DH has largely failed to discuss failure both true and salutary. As a now-tenured and otherwise privileged—in terms of personal identity and institution—faculty member in the field, I was personally convicted that I should be doing far more than talking about talking about failure. This post is my first foray into discussing a specific failure in my career.
I’m going to write about the database of nineteenth-century newspaper reprints that still doesn’t exist, seven years into the Viral Texts project. I will write about some of the things this failure has taught me, because I do believe the primary reason to normalize failure is that all scholars fail in small and large ways, and ideally learn from those failures. We make the process of scholarship more intimidating to newcomers and inculcate feelings of inadequacy when we publicly trumpet only our triumphs. However, in sharing what failure taught me, I will not try to spin a failure into a secret success. One of the central goals we articulated early in the Viral Texts project was that we would create a resource that non-technical scholars could use to explore reprinting, and today our data remain largely inaccessible to those without moderate technical proficiencies. That is not a secret success; it is a legitimate failure that I still hope to rectify.
The Viral Texts project was my first venture into computationally intensive research: text mining across large collections. (Note: if you’re coming to this post unaware of the VT project or its methods, I won’t spend time unpacking them here; the project’s publications provide a nice primer.) My DH work prior to VT was largely archival, encoding digital editions of a particular reprinted text. And so I brought an archival, edition-building mindset to VT. Early in the project I imagined we would:
- Develop an algorithm to identify reprinted texts in newspapers and magazines (a toy sketch of this idea follows the list)
- Build a database to publish the reprints we identified
- Annotate the reprints we discovered, adding information about authorship, genre, publication venue, etc.
- Share our database with other scholars
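To make that first goal concrete: at its simplest, reprint detection can be imagined as looking for passages that share an improbable number of overlapping word n-grams. The toy sketch below is only an illustration, not the method VT actually uses (our pipeline, described in the project’s publications, relies on more robust alignment to tolerate OCR errors), and all of its names are mine:

```python
# Toy sketch of n-gram "shingling" for reprint detection; illustrative
# only, and far simpler than the alignment methods VT actually uses.

def shingles(text, n=5):
    """Return the set of overlapping word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def likely_reprint(a, b, n=5, threshold=2):
    """Flag two passages as a reprint candidate if they share at
    least `threshold` n-grams (a stand-in for real alignment)."""
    return len(shingles(a, n) & shingles(b, n)) >= threshold
```

Real newspaper OCR is noisy enough that exact matching like this misses many true reprints, which is precisely why the project’s methods kept iterating, as I discuss below.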
During our prototype stage, generously funded by an NEH Tier 1 grant, we did just this, focusing on the pre-Civil-War newspapers in Chronicling America. Because none of us were database designers, a significant portion of our grant budget went to contracting a developer to create a database for us, which they did. The public results of that research were discussed in a pair of articles that appeared in American Literary History in 2015. Those articles both point to a database that now, only five years later, returns a 404 error. Technically, the database still exists, but due to a security issue it’s now restricted to approved researchers and must be accessed through the NU library, though there’s no way a reader of our articles would know that when they click the bad link.
So what happened?
- First, the number of reprints identified far outpaced our team’s ability to annotate them as I’d imagined. Even working with only pre-1861 newspapers from a single historical archive, we identified nearly two million distinct “clusters” of reprinted text. We were at the time a team of four or five researchers: two junior faculty members and two or three graduate research assistants. At even five minutes of attention per cluster, two million clusters would demand more than 160,000 hours of work. The idea that we would richly annotate even a meaningful subset of our results, in the manner of a typical digital archives project, proved immediately absurd.
- Second, our computational methods continued to iterate. One of the great strengths of VT has been the deep and genuine interdisciplinary collaboration that underlies the project. However, for the project to remain both interesting and publishable for its computer science collaborators, we cannot develop a single reprint-detection method and call it done. Instead, we run an experiment, evaluate the results, discuss how to refine the methods, and re-run. Each time we get a slightly different dataset testifying to nineteenth-century reprinting practices, which makes it especially challenging to freeze our data at any one moment for representation in a static database (a retrospective sketch of one way to handle such freezes appears after this list). I have learned a good deal from this project about iterative, data-driven research; had I fully understood that mode of work earlier, I might have planned our outputs very differently. My edition-building perspective simply was not prepared for the reality of CS research.
- Third, and relatedly, the project data very quickly broke the database. Essentially, as soon as we expanded our experiments to include the full nineteenth century (and even before we brought new data sources into conversation with Chronicling America, as we’ve done since), we found ourselves with tens of millions of reprint clusters, and the database architecture built under the original grant could not support data of that size. By this time our grant had ended, and we did not have funds to contract for database improvements. Our team remained unskilled at database design because, shockingly, the demands of being junior faculty had not left me time to also become a database architect.
- We did secure follow-on grants for aspects of the VT project—or for projects to which VT contributed without being the primary focus—but none centered the database problem, which, to be frank, I kept pushing to the back of my mind as something we would solve soon. What “soon” meant in my mind, or by what means I thought we would solve this thorny problem, I’m not sure, but for too long it never felt like the top priority for the project, however much the lack of a public database bothered me. This wasn’t a scholarly failure so much as a mental one; I just kept putting it off because it was too hard to face, and the longer I put it off the harder it became to face.
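With hindsight, the tension running through the second and third items has a familiar shape: iterative experiments call for immutable, versioned snapshots rather than a single mutable database. The sketch below is a retrospective illustration of that pattern, not anything we built, and every name in it is hypothetical:

```python
# Retrospective illustration: freeze each experiment's clusters as an
# immutable, versioned release with a manifest of the parameters that
# produced it. All names here are hypothetical.

import csv
import json
import pathlib
from datetime import date

def freeze_run(clusters, run_id, params, root="releases"):
    """Write one experiment's clusters, plus the parameters that
    produced them, to a self-describing release directory."""
    out = pathlib.Path(root) / run_id
    out.mkdir(parents=True, exist_ok=False)  # refuse to overwrite a frozen run
    with open(out / "clusters.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(clusters[0]))
        writer.writeheader()
        writer.writerows(clusters)
    # The manifest lets a publication cite exactly which run produced a claim.
    (out / "manifest.json").write_text(json.dumps(
        {"run_id": run_id, "created": date.today().isoformat(), "params": params},
        indent=2))
    return out

# e.g. freeze_run(clusters, run_id="2014-06-n5", params={"ngram": 5})
```

A frozen release per experiment would have let a public resource lag behind the newest methods without being wrong; it would simply be versioned.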
I wish the explanations for this failure were more complex, but they are not. VT was my first major grant, and my inexperience planning and implementing a major project no doubt came into play, at least in the initial stages, when better foundations could have been laid. Were I rewriting that first, establishing grant with the hindsight I have now, I would organize things differently to ensure a simple, expandable infrastructure better prepared for the iterative project to come.
As the project has progressed, we have worked to always publish the code and data on which our publications’ arguments are based. We also continue to work on a database infrastructure that will support our forthcoming book project, though this has required a wholesale reimagining of how texts relate to one another and how they should be represented in digital space. I am glad this work is finally maturing, and—to cite another positive outcome—these challenges have forced me to reckon with ideas of “the text” in ways I never anticipated when starting out. I am a better bibliographer as a result of this failed database.
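To give a flavor of that reimagining (and only a flavor; these names are illustrative rather than our actual schema), the unit of record stops being a single canonical text and becomes a cluster of dated witnesses:

```python
# Illustrative only: modeling "the text" as a cluster of witnesses
# rather than one canonical document. Not our production schema.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Witness:
    newspaper: str  # venue in which this appearance was printed
    date: str       # ISO date of the issue
    page: int
    ocr_text: str   # the (noisy) text as it actually appeared

@dataclass
class Cluster:
    cluster_id: str
    witnesses: list[Witness] = field(default_factory=list)

    def venues(self):
        """Every newspaper through which this text circulated."""
        return {w.newspaper for w in self.witnesses}
```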
But that is still not to say this failed database was a secret success. I always keep in mind a prototypical nineteenth-century scholar I hope our project can address: someone who does incredible archival work and is not hostile to digital methods, though they don’t employ such methods in their own research. Someone who wants to learn how a particular set of texts, or a particular author’s work, was shared in nineteenth-century newspapers, and who hopes to use our findings to that end. Someone, however, who would encounter gigabytes of CSV files and have no idea how to begin sifting through them for the texts they are interested in. For that scholar, our project is thus far a failure, and I am not content with that reality.
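To make that barrier concrete, here is roughly the kind of script such a scholar would need just to find a familiar text in data published as large CSVs; the file and column names are hypothetical, and the searched phrase is only an example. For a programmer this is trivial. For the archival scholar I describe, it may as well be a locked door:

```python
# Illustrative: scan a large cluster CSV for a phrase without loading
# the whole file into memory. File and column names are hypothetical.

import pandas as pd

matches = []
for chunk in pd.read_csv("clusters.csv", chunksize=100_000):
    hits = chunk[chunk["text"].str.contains("Old Ironsides",
                                            case=False, na=False)]
    matches.append(hits)

results = pd.concat(matches)
print(f"{len(results)} witnesses mention the phrase")
```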