
If you are a professor of humanities, as I am, then widespread use of chatbots by your students is either the worst thing that’s ever happened to you—or one of the best. I know that what follows may sound like inside baseball to non-academics, but bear with me: Let’s try looking at these matters from the teacher’s point of view.
If you are committed to doing what you’ve always done—which you may well be, because what you’ve always done is all you know how to do—then the rise of the chatbots will hurt, and hurt a lot. If you’re the typical humanities professor, what you’ve always done is assign the good old thesis essay (with or without research, depending on the situation): an essay that stakes a claim and then defends that claim against possible objections. In its most classic form, such an essay will have an introductory paragraph that states the thesis, then three major points in which that thesis is developed and defended against potential objections, and then a conclusion. In high school, that’s a five-paragraph essay; in college, the essays are often longer, but they have essentially the same structure. (If you’re not a humanities professor, you’re still probably having some essay memories right now … painful ones, I expect.)
If that’s what you assign, you can be very clear about this: No matter what rules you establish, your students are going to get AI to do these essays for them. It’s exactly the kind of thing the chatbots are really good at, because it’s completely formulaic and mechanical, and there are zillions of examples out there for the LLMs to draw upon.
Your university has likely purchased some software that claims to be able to detect AI use. But all such services occasionally produce false positives, and that has made many universities very wary about using them. It would not be good publicity—nor good marketing—to let it be known that students were denied credit, or perhaps even denied graduation, because a service said that their work was AI-generated when in fact it was not. So if you want to game your students’ system for gaming your system, hard times are a-comin’—unless, like some professors I know, you keep assigning the same things you’ve always assigned while merely telling your students that they’re on their honor not to use AI. (If you can do that and sleep at night, I admire your powers of compartmentalization. But only your powers of compartmentalization.)
One of the favors that chatbots have done for humanities professors is to reveal why they are so good at the thesis-essay assignment: It has always been an exceptionally formulaic exercise. If we engage in a little self-examination, we’ll realize that we like it formulaic, because that reduces the time and mental energy we have to invest in grading. It’s easy to compare any given student’s essay to the template in your mind and quickly see the extent to which it matches or deviates from it. The rise of the chatbots—with their algorithmic pattern-matching, their stochastic parrot behavior—has revealed that students and faculty alike have been, for many decades, functioning in exactly the same way. If we could confront our chatbots the way parents confront their kids about drug use, the bots would surely reply “I learned it by watching you!”
If we’re willing to let the rise of the chatbots force certain questions upon us, this could be not the worst of times, but the best of them. A little reflection would allow us to see the ways that we have for many years misunderstood what we’re all about: We may have thought we wanted our students to be more sensitive readers, more thoughtful interpreters, more rigorous analysts, but what we were really telling our students was that we wanted them to be better writers of thesis essays.
What do such essay assignments achieve? Well, you might say, they show that students have understood the texts assigned to them, that they can read intelligently, interpret with some degree of sophistication, and relay those interpretations in clear prose. Fine. But what if that’s not what the assignments actually do? What if they don’t mark genuine engagement with and response to literature? What if, instead, they simply reward students who internalize the formula and are able to regurgitate it? On some level, we’ve probably all realized that in many cases that’s exactly what happens. The rise of the chatbots gives us an opportunity to admit it. And that’s a pretty good thing.
I should pause here to say that, of course, there are many professors in the humanities who want their students to use AI to do their assignments—who wish to increase their students’ dependence on the big AI companies. To those professors I say: Go in peace, and may our paths never cross.
When I have talked with my fellow professors in the Great Texts program at Baylor’s Honors College, I have learned a few things. Some professors have for many years been giving oral examinations in the old Oxford and Cambridge tutorial style, where students read their papers aloud, and the professor interrupts to ask questions like “What do you mean by that word? What does that phrase mean?” This allows the professor to discover whether the student actually knows what he or she is talking about. In such situations, and in full oral exams, there are few ways to hide your ignorance. Professors who teach this way can largely (if not wholly) ignore the AI freakout.
Other professors have been using this new world as an opportunity to rethink what they’re doing and why. One colleague, for instance, went to Walmart and bought her students a bunch of cheap composition notebooks, handed them out, and asked the students to use them to make commonplace books—that is, collections of choice quotations from wise authors, written out in their own hand. I have been bringing into class handouts with a paragraph or two on them, and asking the students to annotate them thoroughly in class. This does take up more classroom time, but I compensate by making short audio lectures that I email to my students. I’ve always given a lot of reading quizzes; now I give more. This is a version of what some people call the flipped classroom, but accelerated by the rise of chatbots.
I’ll be retiring from teaching at the end of this year. It has been wonderful to spend time coming up with alternative assignments—trying, after more than 40 years in the classroom, to think in fresh ways about what I want my students to know and what I want them to be able to do. Properly understood, the disruption of humanities teaching by AI is a gift, and I plan to receive it as such, rather than complain about a burden. As a teacher, I find these new conditions invigorating and refreshing. I feel like Charles Foster Kane when he started his career as a newspaper publisher: I don’t know how to teach masterpieces of literature and philosophy and theology, I just try everything I can think of. I find that my students—even if they’re not always as excited as I am—welcome these experiments and are quite willing to engage in them.
I’m teaching a course on fantasy this semester, and we’re now reading The Lord of the Rings. I asked my students to note the extensive maps printed at the end of that book, which the previous books we’ve read—George MacDonald’s Phantastes, Lord Dunsany’s The King of Elfland’s Daughter, and Lud-in-the-Mist by Hope Mirrlees—do not have. I handed my students some blank sheets of paper and asked them to draw, as best they could, maps of the worlds of those books. They quickly discovered that it was not possible to do this for Phantastes—though it was quite easy, if with some debate about how best to do it, for the other two. Phantastes is unmappable. Which leads to an interesting question: Why? Why did MacDonald write a book set in a world you can’t map? That turns out to be a very important question if you want to understand his peculiar and powerful book.
I don’t think we would have gotten into these issues about the visualization of fictional worlds—why it matters, and what you do instead when you can’t visualize—if I hadn’t been on the lookout for a different kind of assignment.
So for me, the rise of the chatbots has been an unexpected, late-career gift. It has made my teaching more fun for me, and I think more interesting for my students. And I believe the lessons I have learned can be generalized.
As humanities education has become more threatened by budget cuts, an all-consuming university focus on STEM, and self-inflicted unpopularity, it has circled the wagons, becoming more and more fad-obsessed and formulaic in its gestures. I remember when, 25 years ago, every English department in America suddenly decided it had to have a “body critic” to talk about “representations of the body” in literature. (Never “bodies,” by the way: the body.) That led to graduate seminars on “Feminism and the Body,” or “The Black Body in the Southern Imagination,” or “The Colonized Body”—which then became undergraduate classes. That’s just one example among many. This trickling-down of concepts from initial critical writings to graduate seminars to undergraduate classes, and then the expectation that undergraduates would be able to (stochastically!) parrot this discourse in their essays, has been how humanities departments function. The boundaries of academic discourse got policed more vigorously as the territory shrank.
The circling of wagons makes sense when we’re confident that the enemy is outside our perimeter, but when the enemy is everywhere, including inside our wagons’ tents and holding the reins of our horses, then some new and imaginative strategies are called for. The current circumstances, properly seized, could prompt a genuine reinvigoration of the humanities, and even of student interest in taking humanities classes. By depriving students of constant AI use—or, to put it more accurately, by allowing them some respite from the tyranny of the chatbots over their lives—we actually enable them to exercise their minds in unfamiliar, and for some unprecedented, ways.
In short, there’s a great opportunity here for those who want to take it. Humanities professors of the world, unite! We have nothing to lose but our self-forged chains.