Response: Student-Centered AI & DH Practices Roundtable
The following are my remarks as a respondent to papers by Kathi Inman Berens, Brian Croxall, and Matthew Kollmer for the “Student-Centered AI & DH Practices” roundtable at MLA 2026. Some of this was written ahead of the panel and some during the panelists’ talks, so forgive its informality.
In 2026 I find we’re in an interesting place regarding students’ understanding and use of AI tools. This fall I taught a course titled “Writing with Robots,” an advanced writing course for English majors focused on the long history and present of automated writing systems. The class started by reading a series of science fiction depictions of AI—as a way to understand the modern myths that shape, often badly, our responses to the products being branded AI in 2026. From there we went back in time to look at automatons, automatic writing devices, writing templates, and a range of other machines and systems from the Renaissance to the late 20th century. And then we spent the second half of the semester digging into how LLMs and related systems work, and reviewing a range of cultural and intellectual responses to the technology.
In that class, we spent much of our time discussing the many technical, intellectual, and moral challenges these systems pose. My students were mostly English majors, and they identified strongly—both rhetorically and, I believe, ethically—with an “anti-AI” position.
While our class deepened or refined their understanding, they arrived on day one generally familiar with and eager to discuss ideas about the dangers of AI:
- stealing the work of artists and thinkers
- hallucinating ideas and sources
- leading to cognitive offloading and learning loss
- damaging the environment
- exacerbating loneliness or mental illness
- and so on
If anything, my challenge was to encourage them to consider less strident positions and to experiment with the tools we explored in the class, as they were well primed to opine on AI’s problems. Now, I want to include a caveat here. It’s entirely possible that some of this feedback followed what we might call “the Wikipedia effect.” By which I mean: by 2008 or so, students understood what they thought most teachers wanted to hear them say about Wikipedia, and they were happy to reflect those ideas back even if their practice was more complex than those ideals conveyed. And certainly my students might have been doing something like this: anticipating their English professor’s opinions about AI and trying to reflect those opinions back to him, even if they perhaps misjudged what I wanted to hear.
As the semester went on and we discussed more and more, it became clear that every student—with one notable exception—regularly used AI tools of various kinds. They would never use them to write a paper for English—of course!—but to generate some code for an assignment in statistics? Well, they weren’t programmers and that seemed perfectly acceptable. Painting with a broad brush, what emerged was that students saw their own use of AI as limited, reasonable, and well-considered, while they often saw others’ uses as excessive, reckless, and thoughtless. For what it is worth, I see some of these same trends echoed in my colleagues, whose comfort with particular applications of AI tracks to their own priorities and values: e.g. it’s terrible for writing but fine for code, &c.
In 2026, students are processing profoundly mixed messages about AI, from instructors banning it with a righteous fervor to universities emailing them about a campus-wide plan that gives them free access with their student accounts. Our challenge as a class was not to enumerate the many problems of AI, but to help students navigate the far thornier negotiation between utility and damage that technology always brings.
I am wary of inevitability narratives. If a career studying inflection points of historical technological change has taught me anything, it’s that anyone standing in a moment of technological shift claiming to know precisely how things will shake out is selling something. What I find intriguing about the assignments and frameworks our panelists shared this morning is that they are not simple technical training—welp, AI is coming so we might as well teach you to use it—but neither do they seek to shut down engagement. They are all in different ways aimed at helping students more thoughtfully negotiate their engagement with tools that are still unformed and unsolidified, however ubiquitous they might suddenly seem.
In a wonderful panel yesterday on material approaches to teaching in the age of AI, Megan Cook (Colby College) used the term “high friction” to describe the relationship of her assignments to AI. She was not trying to design assignments that entirely foreclosed students using AI—an impossible, or at least Sisyphean, task—but instead to design assignments that drew on unique, physical, local resources and would thus foreclose flippant or lazy uses of AI. The effort a student would have to undertake to use AI in these assignments—collecting and training the model with specific, local information—would require the student to essentially do most of the intellectual work that would have been required to simply complete the assignment.
I see elements of “high friction” in the examples today: not in the sense that they prevent using AI—as most of them explicitly require AI use—but instead in the ways that the assignments’ required engagement forces iterative, sustained, attentive consideration of affordances and limitations. Kathi Inman Berens encourages us to think about both senses of “attention”—its technical meaning in training LLMs, but also our own attention to the ways LLM interfaces mask math as intimacy. Navigating between AI+, AI-, and AI* helps her students weigh and contrast distinct paradigms of engagement with the technology. Brian Croxall seeks to show the “rough beasts” of AI “as they are” by approaching the technology from distinct directions: as a mirror of their own work, when asking it to write a title, or as an object of critique, when they analyze its writing. Asking students to engage AI even through their hatred helps prepare humanities students to respond proactively—whether in adoption or critique—rather than defensively to the world in 2026. By focusing on process pedagogy, Matthew Kollmer reframes LLMs not as technologies that output a final product—putting words on a page—but as parts of a longer, iterative intellectual dialogue. By directly comparing and assessing outputs, Kollmer’s students develop a more nuanced understanding of where—if ever—LLMs might assist their writing.
Each assignment exemplifies a kind of “slow engagement” and suggests paths forward in this fluid and paradoxical moment. So I’d like to start our conversation today by asking our panelists to reflect on this idea of “high friction,” and how it might—or might not—help both students and teachers navigate AI in 2026, when—I would argue—simply understanding the perils of the technology is no longer a sufficient endpoint to our conversations.