Omeka/Neatline Workshop Agenda and Links

We’ll be working with the NULab’s Omeka Test Site for this workshop. You should have received login instructions before the workshop. If not, let us know so we can add you.

Workshop Agenda

9:00-9:15 Coffee, breakfast, introductions
9:15-9:45 Omeka project considerations

9:45-10:30 The basics of adding items, collections, and exhibits
10:30-10:45 Break!
10:45-11:15 Group practice adding items, collections, and exhibits
11:15-12:00 Questions, concerns
12:00-1:30 LUNCH!
1:30-2:15 Georectifying historical maps with WorldMap Warp
2:15-3:00 The basics of Neatline
3:00-3:15 Break!
3:15-3:45 Group practice creating Neatline exhibits
3:45-4:00 Final questions, concerns
4:00-5:00 Unstructured work time

Sample Item Resources

Historical Map Resources

Omeka Tutorial

Neatline Tutorials

Model Neatline Exhibits

Representing the “Known Unknowns” in Humanities Visualizations

Note: If this topic interests you, you should read Lauren Klein’s recent article in American Literature, “The Image of Absence: Archival Silence, Data Visualization, and James Hemings,” which does far more justice to the topic than my scant paragraphs here.

Pretty much every time I present the Viral Texts Project, the following exchange plays out. During my talk I will have said something like, “Using these methods we have uncovered more than 40,000 reprinted texts from the Library of Congress’ Chronicling America collection, many hundreds of which were widely reprinted—and most of which have not been discussed by scholars.” During the Q&A following the talk, a scholar will inevitably ask, “you realize you’re missing lots of newspapers (and/or lots of the texts that were reprinted), right?”

To which my first instinct is exasperation. Of course we’re missing lots of newspapers. The majority of C19 newspapers aren’t preserved anywhere, and the majority of archived newspapers aren’t digitized. But the ability to identify patterns across large sets of newspapers is, frankly, transformative. The newspapers digitized under the Chronicling America banner are actually the product of many state-level digitization efforts, which means we’re able to study patterns across collections housed in many separate physical archives, providing a level of textual access that would be difficult, though not impossible, in the physical archive. So my flip answer—which I never quite give—is “yes, we’re missing a lot. But 40,000 new texts is pretty great.”

But those questions do nag at me. In particular I’ve been thinking about how we might represent the “known unknowns” of our work,[1] particularly in visualizations. I really started picking at this problem after discussing the Viral Texts work with a group of librarians. I was showing them this map,

which transposes a network graph of our data onto a map that merges census data from 1840 with the Newberry Library’s Atlas of Historical County Boundaries. One of the librarians was from New Hampshire, and she told me she was initially dismayed that there were no influential newspapers from New Hampshire, until she realized that our data doesn’t include any newspapers from that state, because it has not yet contributed to Chronicling America. She suggested our maps would be vastly improved if we somehow indicated such gaps visually, rather than simply talking about them.

In the weeks since then, I’ve been experimenting with how to visualize those absences without overwhelming a map with symbology. The simplest solution, as is almost always the case, appears to be the best.

In this map I’ve visualized the 50 reprintings we have identified, between 1836 and 1860, of one text: a religious reflection by Louisville editor George D. Prentice, often titled “Eloquent Extract.” The county boundaries are historical, drawn from the Newberry Atlas, but I’ve overlaid modern state boundaries with shading to indicate whether we have significant, scant, or no open-access historical newspaper data from those states. This is still a blunt instrument: entire states are shaded, even when our coverage is geographically concentrated. For New York, for instance, we have data from a few NYC newspapers and magazines, but nothing yet from the north or west of the state.

Nevertheless, I’m happy with these maps: they are helping me begin to think through how to represent the absences in the digital archives from which our project draws. And indeed, I’ve begun thinking about how such maps might help us agitate—in admittedly small ways—for increased digitization and data-level access for humanities projects.

This map, for instance, visualizes the 130 reprints of that same “Eloquent Extract” which we were able to identify by searching across Chronicling America and a range of commercial periodical archives (huge thanks to project RA Peter Roby for keyword searching many archives in search of such examples). For me this map is both exciting and dispiriting: it points to what could be possible for large-scale text-mining projects while emphasizing just how much we are missing when forced to work only with openly available data. If we had access to a larger digitized cultural record, we could do so much more. A part of me hopes that if scholars, librarians, and others see such maps, they will advocate for increased access to historical materials in open collections. As I said in my talk at the recent C19 conference:

While the dream of archival completeness will always and forever elude us—and please do not mistake the digital for “the complete,” which it never has been and never will be—this map is to my mind nonetheless sad. Whether you consider yourself a “digital humanist” or not, and whether you ever plan to leverage the computational potential of historical databases, I would argue that the contours and content of our online archive should be important to you. Scholars self-consciously working in “digital humanities” and also those working in literature, history, and related fields should make themselves heard in conversations about what will become our digital, scholarly commons. The worst possible thing today would be for us to believe this problem is solved or beyond our influence.

In the meantime, though, we’re starting conversations with commercial archive providers to see if they would be willing to let us use their raw text data. I hope maps like this can help us demonstrate the value of such access, but we shall see how those conversations unfold.

I will continue thinking about how to better represent absence as the geospatial aspects of our project develop in the coming months. Indeed, the same questions arise in our network visualizations. Working with historical data means that we have far more missing nodes than network scientists working with, for instance, modern social media data typically confront. Finding a way to represent missingness—the “known unknowns” of our work—seems like an essential humanities contribution to geospatial and network methodologies.

1. Yes, I’m borrowing a term from Donald Rumsfeld here, which seems useful for thinking about archival gaps, if perhaps not so useful for thinking about starting a war. We can blame this on me watching an interview with Errol Morris about The Unknown Known on The Daily Show last night.

Creating a Historical Map with GIS

In the next few days I’ll be teaching a few workshops centered largely on teaching participants to georeference historical maps using ArcGIS. I’ll do this first at the Northeastern English Graduate Student Association’s 2013 Conference, /alt, and then at the Boston-Area Days of DH conference we’re hosting at the NULab March 18-19.

We’ll be learning a few things in this workshop:

  1. How to add base maps and other readily-importable data to ArcGIS
  2. How to plot events in ArcGIS using spreadsheet data
  3. How to georeference a historical map in ArcGIS
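For the second goal, ArcGIS does the plotting through its GUI (the “Add XY Data” step), but the underlying idea is simply turning spreadsheet rows with coordinates into point features. As a minimal, illustrative sketch of that idea, the following Python script (all names and sample data are hypothetical, not part of the workshop files) converts a CSV of events into KML placemarks, which ArcGIS or Google Earth can open directly:

```python
import csv
import io
import xml.etree.ElementTree as ET

def events_to_kml(csv_text):
    """Convert rows with name,lat,lon columns into a minimal KML document.

    KML placemarks open directly in Google Earth or ArcGIS, so this mimics
    the 'plot XY data from a spreadsheet' step in miniature.
    """
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for row in csv.DictReader(io.StringIO(csv_text)):
        placemark = ET.SubElement(doc, "Placemark")
        ET.SubElement(placemark, "name").text = row["name"]
        point = ET.SubElement(placemark, "Point")
        # KML expects longitude,latitude (i.e. x,y) order.
        ET.SubElement(point, "coordinates").text = f'{row["lon"]},{row["lat"]}'
    return ET.tostring(kml, encoding="unicode")

# Hypothetical sample rows standing in for a workshop spreadsheet.
sample = "name,lat,lon\nBoston Courier,42.36,-71.06\nNew-York Tribune,40.71,-74.01\n"
print(events_to_kml(sample))
```

The same spreadsheet loaded into ArcGIS via Add XY Data produces equivalent point features; the script just makes the row-to-point mapping explicit.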

For that last goal, the step-by-step guide by Kelly Johnston should be your go-to reference. We’ll be following Kelly’s instructions almost to the letter, though we’ll be using different data.
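Under the hood, the simplest (first-order) georeference is an affine transform fitted to your control points: each pixel coordinate maps to a map coordinate via a linear combination plus an offset. The sketch below (assuming numpy; the control points are invented for illustration) shows the least-squares fit that GIS software performs when you click matching points:

```python
import numpy as np

def fit_affine(pixel_pts, geo_pts):
    """Fit a first-order (affine) pixel-to-map transform via least squares.

    pixel_pts, geo_pts: matched (x, y) control-point pairs; at least three
    non-collinear pairs are needed to determine the six coefficients.
    """
    px = np.asarray(pixel_pts, dtype=float)
    geo = np.asarray(geo_pts, dtype=float)
    # Design matrix [x, y, 1]: each output coordinate is a*x + b*y + c.
    A = np.column_stack([px, np.ones(len(px))])
    coeffs, *_ = np.linalg.lstsq(A, geo, rcond=None)
    return coeffs  # shape (3, 2): x-term, y-term, constant for lon and lat

def apply_affine(coeffs, pixel_pt):
    x, y = pixel_pt
    return tuple(np.array([x, y, 1.0]) @ coeffs)

# Hypothetical control points: image corners matched to lon/lat.
pixels = [(0, 0), (1000, 0), (0, 800), (1000, 800)]
lonlat = [(-75.0, 43.0), (-70.0, 43.0), (-75.0, 39.0), (-70.0, 39.0)]
coeffs = fit_affine(pixels, lonlat)
print(apply_affine(coeffs, (500, 400)))  # center of the image
```

Higher-order transformations (which ArcGIS also offers) add polynomial terms to bend the historical map more flexibly, at the cost of needing more control points.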

We’ll be using these files for the lab. This tutorial, prepared for my graduate digital humanities class, walks through the same steps we’ll follow, in case you need to review a step here or later.

A few other worthwhile links:

  • The Spatial Humanities site is a useful clearinghouse of both spatial theory and praxis across a range of humanities fields. Kelly Johnston’s step-by-step above is only one of a growing collection of such resources on the Spatial site.
  • The David Rumsey Historical Map Collection. If you want a historical map with which to practice (or, frankly, for your research), this is an excellent first stop. In short, it’s many thousands of historical maps, provided for free. To download high-resolution versions of the maps, you must create a (free) account and log in.
  • Neatline is an incredibly robust Omeka plugin that allows you to create spatial exhibits of your collected materials. Check out some of the demos—it’s really phenomenal stuff. We won’t have time to go over Neatline, but one could, for instance, make use of a map georeferenced in ArcGIS as a base map for a Neatline exhibit.
  • Hypercities is another important spatial humanities platform that makes use of Google Earth and allows users to build “deep maps” of spatial data, historical maps, images, video, and text. Check out some of their collections to see what Hypercities can do. The collections around Los Angeles, Berlin, and Rome are particularly robust.

Finally, two spatial non sequiturs:

MLA 2012 Presentation: “Mapping the Antebellum Culture of Reprinting”

Below I’ve copied the (very rough) text of my talk at MLA 2012, as part of the Society for Textual Scholarship’s “Text:Image – Visual Studies in the English Major” panel. You can download the accompanying slides here.

“Mapping the Antebellum Culture of Reprinting”

Today I want to talk about how mapping using geographic information systems (GIS) software might help us better understand the dynamic world of print culture in the United States before the Civil War—what Meredith McGill calls “the antebellum culture of reprinting.”

Continue reading

“The Celestial Railroad” and the 1861 Railroad

Cross-posted from my “Hawthorne’s Celestial Railroad: A Publication History” development blog at http://blog.celestialrailroad.org/2011/10/the-celestial-railroad-and-the-1861-railroad/

At this January’s MLA Convention, I’ll be presenting on the Society for Textual Scholarship’s sponsored panel, “Text:Image – Visual Studies in the English Major” (viewing the panel description may require an MLA membership). I’ll discuss “Mapping the Antebellum Culture of Reprinting,” thinking through my experiments with GIS in the past few years, particularly since attending the GIS course at the Digital Humanities Summer Institute this past summer.

So I was thrilled this past week to read William G. Thomas’ talk, “What We Think We Will Build and What We Build in Digital Humanities,” from this year’s Nebraska Digital Workshop, and to learn from the talk about Thomas’ project, Railroads and the Making of Modern America. The project itself is fascinating, and I immediately wondered if some of their data might help me investigate the circulation of “The Celestial Railroad.” I’ve suspected for a while that Hawthorne’s tale—which satirizes uncritical modernizing through the central image of a railroad—ironically may have spread around the country through the railroad system. Continue reading

David Rumsey’s Historical Maps in Google Earth

While preparing for this week’s Modern Language Association Convention in Los Angeles, I revisited the amazing digital collection of the David Rumsey Historical Map archive. This site provides digital copies of many of the 24,000 maps in the archive, even allowing visitors to download high-resolution files of them. I’ve used several of these maps of the United States in the late 1830s and 1840s to trace the spread of “The Celestial Railroad” across the country.

This week, however, I discovered that a number of the maps in the collection can be downloaded as .kmz files to be viewed in Google Earth. Importing such a file into Google Earth allows you to lay maps from the Rumsey collection over the Google Earth globe. These maps are georectified, meaning that their features have been aligned with the corresponding locations on the modern globe. Continue reading
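For the curious: a .kmz file is just a zip archive containing a KML document (conventionally doc.kml) plus any images it references, and a georectified overlay like the Rumsey ones is expressed as a GroundOverlay pinned to the globe by its bounding coordinates. The sketch below (file names, bounds, and image bytes are invented placeholders, not taken from the Rumsey collection) builds such an archive with the Python standard library:

```python
import io
import zipfile

# Minimal doc.kml: a GroundOverlay pins an image to the globe by its
# north/south/east/west bounds, which is the core of a simple
# georectified overlay. Bounds and file name here are hypothetical.
DOC_KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>Hypothetical 1846 map overlay</name>
    <Icon><href>map_1846.png</href></Icon>
    <LatLonBox>
      <north>45.0</north><south>38.0</south>
      <east>-69.0</east><west>-80.0</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""

def build_kmz(kml_text, image_name, image_bytes):
    """Pack the KML plus its referenced image into a .kmz (a zip file)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("doc.kml", kml_text)  # Google Earth reads doc.kml first
        z.writestr(image_name, image_bytes)
    return buf.getvalue()

kmz = build_kmz(DOC_KML, "map_1846.png", b"\x89PNG placeholder bytes")
print(len(kmz), "bytes")
```

Opening the resulting file in Google Earth would drape the image over the stated bounding box, which is exactly what happens when you import a Rumsey .kmz.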