Yesterday I posted on Twitter: “Sometimes the central message of my classes is simply ‘technology’ ≠ ‘internet technologies.’”

As I noted in the short thread, that message is important to me because when students recognize the historical contingencies of current technology, that recognition can spark more capacious technological imaginations, not limited by the idea that the specific corporate products created by Silicon Valley over the past few decades make up the sum total of human technological endeavor or possibility. Those broader imaginations, in turn, can better envision alternative technological futures.

The Exponential Fuzziness of Historical Perception

There’s a specific in-class activity that I have found productive for encouraging this kind of thinking, and I realized today that, since I’m teaching asynchronously this term, I never translated this in-class discussion prompt into an async assignment. I’m going to try to do that now, in large part because, despite our readings working to expand the timeline of “technology” in students’ imaginations, their first papers tended to use “technology” as a synonym for “very recent technology.” This is a hard habit to break. Douglas Adams wasn’t wrong when, in 1999, he wrote these rules for how we think about technology:

1) everything that’s already in the world when you’re born is just normal;
2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;
3) anything that gets invented after you’re thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it’s been around for about ten years when it gradually turns out to be alright really.

I honestly don’t love the ageism of this quote, nor do I think it quite gets at the distinction between using and understanding a particular technology. As I’ve written before, notions of “digital natives” can be really damaging to pedagogy, particularly in the ways such phrases obscure disparities of access to both computers and digital skills training. But there is some truth in this quote about how we tend to perceive technologies qua technology.

A quick program note: what follows builds on assumptions about the general public’s perceptions of history. I am very aware that many colleagues, including, well, myself and others who may be reading this post, have expertise that lends them more precise and granular views of more temporally distant eras. The “our” below is neither a prescriptive nor an ideal “our,” but instead a very messy attempt to capture popular perceptions of history through an imprecise collective pronoun.

Early in most semesters I talk with students about the imprecision of historical perception, which seems to grow exponentially fuzzier as we look further back in time. Our sense of the time that has passed during our own lifetimes is quite precise and granular: we can look at an outfit in a photo and guess when the photo was taken by the year, maybe by a particular season in a particular year. A song reminds us of a particular summer, and we understand that it was cliché by the next summer. As we look back farther, however, our perception blurs to the decade (e.g. the 40s, the 50s, the 60s), then to periods (e.g. Victorian, Regency), then to centuries (e.g. the 18th century), then to eras (e.g. medieval, antiquity). Likewise with very recent technology: we have a granular sense of change and innovation, such that the distinctions between generations of a specific phone model seem meaningful, while we look back and see single devices (e.g. the printing press, the telegraph) as metonymies for the technologies of decades, centuries, even eras. We don’t tend to consider the thousands of models and variations of, say, printing presses as “rapidly shifting technology,” but we do think of twice-yearly OS updates in that way.

I don’t want to flatten historical differences through these discussions, or to pretend that technological change was experienced in precisely the same way in 1821 as it is in 2021. There’s a strong argument to be made that the technology available to most people today is in fact shifting more rapidly than it has in most prior historical moments. But even that observation points to why I like the exercise below. Students often see today’s rapid, consumer-driven technological iteration as inevitable, almost a law of nature, rather than as a series of choices we have made collectively. There’s a despair linked to that attitude, because if rapid, consumer-driven technological change is inevitable, then so too are its massive social, environmental, and political consequences. One primary reason to encourage historical perspective is to break that sense of inevitability.

The conversation above typically happens alongside our first readings in media theory, where we’ve defined a medium, in a McLuhanian sense, as something that extends human senses. So writing, considered as a medium or a technology, extends the memory, by allowing us to commit thoughts to an external storage device, and it extends the range of the voice, by allowing our ideas to be carried, literally, farther than we could project sound from our vocal cords. There is of course more recent work that nuances or challenges this framework in productive ways, and we read that work later in these classes. But this is a useful initial framework for helping students expand their notion of technology, because we can focus on precisely what sense is being extended by a particular medium and home in, just at first, on what the messages of particular media might be when considered through such a lens.

400 Years in the Future

Bringing the two threads above together, I break the class into small groups and ask them to imagine themselves as students in a similar course 400 years in the future—so if I were teaching this today, I would say “imagine yourselves as students taking a similar course to this one in 2421.” I note that there’s a fundamental optimism to this exercise, presuming as it does that there are students meeting in 2421, and that they are still studying the history of media. And then I ask them to imagine what single technological change they might describe as central to the early twenty-first century. Such students, I point out, are unlikely to recognize or care about the slight resolution increases between particular camera models in our mobile phones, or the ultimately modest differences in speed between generations of wireless technology. Instead, I encourage them to consider what technology, or class of technologies, has extended our senses in ways that might feel historically significant when viewed from such a temporal distance.

As they discuss, I circulate and ask follow-up questions to help them think through historical nuances they might not have considered. One common answer, for instance, is that “instant communication” is the fundamental technological shift of the early twenty-first century. Here, however, I might encourage them to consider whether our version of “instant communication” would seem substantially distinct, at a perceptual distance of 400 years, from that established by the telegraph, the telephone, or the radio. Human beings have been able to communicate nearly instantly, and over long distances, for at least 150 years, though such communication has certainly become more widely available and convenient during that time. Others will point to the automation—both positive and negative—made possible by digital data and the computational processing of it, which raises discussions of the historical government, corporate, and colonial systems that created this data regime and began such automation prior to computation. The goal of this nuancing isn’t to disprove their theories, and in an ideal class we’d be able to debate several ideas. Ultimately, we’re imagining the future’s perception of the present, and there is no right answer.

I’m probably most sympathetic, personally, to this theory: “in the early twenty-first century, the growth of personal computing devices paired with the internet enabled people to access information, data, ideas, and opinions on demand.” Though we can point to many historical media that contributed to this shift—no technology emerges wholesale into the world—this particular use of computing technology, as an ever-available, externalized hive mind, feels like the kind of shift that could still appear significant from a long historical perspective. But my own favorite answer is, really, beside the point.

The Point

The point is that as we talk about contemporary technology through this framework, some of the temporally local dynamics begin to fall away. A group might propose, for example, that social media will be the defining change of our period—and we might in fact imagine that the sudden and massively widespread ability for so many people to post their thoughts and opinions would be an important enough shift, in both good and bad ways, that students in 400 years would recognize it as such. But it also becomes clear when we talk about social media in this way that the differences between specific platforms or apps, which can feel so weighty and generationally fraught, probably aren’t very meaningful from the long perspective of history. This isn’t to say that Twitter and Facebook and TikTok are precisely the same, or to conflate their user communities, but it’s worth at least asking whether they extend our senses in fundamentally different ways. From that (admittedly limited) McLuhanian perspective, specific social media platforms become far less important, and “social media” writ large becomes the object of inquiry. Again, the goal is not really to make some grand theoretical claim about social media—nor would I have the expertise to do so—but to nudge students out of presentist technological thinking.

Considered even more broadly, the point of an exercise like this is to prompt the kind of historicized technological imagination I advocated at the beginning of this post. I ask students to imagine the technology of their era, with which they are so familiar, from the perspective of a future historian in order to demonstrate that their technology—which they often talk about as the entirety of “technology”—will itself be historical. Following from that, our current technological ecology is only one step along the historical trajectories we will trace in the class—trajectories which began far before we were born, which will extend far beyond our lifetimes, and which could be shaped by these students toward alternative futures. I should note that students often report that this exercise is unsettling—that they don’t necessarily love imagining themselves in the long past tense. I understand their hesitations, though in many ways that unsettling is precisely what I’m after in this exercise.

One of my favorite pithy classroom quips is this one, which I’ve shared on social media before: “not a single person has ever lived through a historical period. Every single person we’re studying lived only in their present.” A printer working with a brand-new steam-driven cylinder press in 1821 wasn’t biding their time until Microsoft Word and laser printers were invented; they were astounded by the remarkable, cutting-edge machine before them. That fact becomes easier for students to grapple with, I think, when I ask them to remember that, while they are living in their present, they are in fact living through someone else’s history.