Your Mileage May Vary is an advice column offering you a unique framework for thinking through your moral dilemmas. To submit a question, fill out this anonymous form or email [email protected]. Here’s this week’s question from a reader, condensed and edited for clarity:
I am a university teaching assistant, leading discussion sections for large humanities lecture classes. This also means I grade a lot of student writing — and, inevitably, see a lot of AI writing too.
Of course, many of us are working on developing assignments and pedagogies to make that less tempting. But as a TA, I have only limited ability to implement these policies. And in the meantime, AI-generated writing is so ubiquitous that taking course policy seriously, or even just escalating every suspected instance to the professor who runs the course, would mean making dozens of accusations, some of them false positives, for basically every assignment.
I believe in the numinous, ineffable value of a humanities education, but I’m also not going to convince stressed 19-year-olds of that value by cracking down hard on something everyone does. How do I think about the ethics of enforcing the rules of an institution my students don’t take seriously, or of letting things slide in the name of building a classroom that feels less like an obstacle to circumvent?
Dear Troubled Teacher,
I know you said you believe in the “ineffable value of a humanities education,” but if we want to actually get clear on your dilemma, that ineffable value must be effed!
So: What is the real value of a humanities education?
Looking at the modern university, one might think the humanities aren’t so different from the STEM fields. Just as the engineering department or the math department justifies its existence by pointing to the products it creates — bridge designs, weather forecasts — humanities departments nowadays justify their existence by noting that their students create products, too: literary interpretations, cultural criticism, short films.
But let’s be real: It’s the neoliberalization of the university that has forced the humanities into that weird contortion. That’s never what they were supposed to be. Their real aim, as the philosopher Megan Fritts writes, is “the formation of human persons.”
In other words, while the purpose of other departments is ultimately to create a product, a humanities education is meant to be different, because the student herself is the product. She is what’s getting created and recreated by the learning process.
This vision of education — as a pursuit that’s supposed to be personally transformative — is what Aristotle proposed back in Ancient Greece. He believed the real goal was not to impart knowledge, but to cultivate the virtues: honesty, justice, courage, and all the other character traits that make for a flourishing life.
But because flourishing is devalued in our hypercapitalist society, you find yourself caught between that original vision and today’s product-based, utilitarian vision. And students sense — rightly! — that generative AI proves the utilitarian vision for the humanities is a sham.
As one student said to his professor at New York University, in an effort to justify using AI to do his work for him, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?” It’s a completely logical argument — as long as you accept the utilitarian vision.
The real solution, then, is to be honest about what the humanities are for: You’re in the business of helping students with the cultivation of their character.
I know, I know: Lots of students will say, “I don’t have time to work on cultivating my character! I just need to be able to get a job!”
It’s totally fair for them to be focusing on their job prospects. But your job is to focus on something else — something that will help them flourish in the long run, even if they don’t fully see the value in it now.
Your job is to be their Aristotle.
For the Ancient Greek philosopher, the mother of all virtues was phronesis, or practical wisdom. And I’d argue there’s nothing more useful you can do for your students than help them cultivate this virtue, which is made more, not less, relevant by the advent of AI.
Practical wisdom goes beyond just knowing general rules — “don’t lie,” for example — and applying them mechanically like some sort of moral robot. It’s about knowing how to make good judgments when faced with the complex, dynamic situations life throws at you. Sometimes that’ll actually mean violating a classic rule (in certain cases, you should lie!). If you’ve honed your practical wisdom, you’ll be able to discern the morally salient features of a particular situation and come up with a response that’s well-attuned to that context.
This is exactly the sort of deliberation that students will need to be good at as they step into the wider world. The breakneck pace of technological innovation means they’re going to have to choose, again and again and again, how to make use of emerging technologies — and how not to. The best training they can get now is training in how to wisely make this type of choice.
Unfortunately, that’s exactly what using generative AI in the classroom threatens to short-circuit, because it removes something incredibly valuable: friction.
AI is removing cognitive friction from education. We need to add it back in.
Encountering friction is how we give our cognitive muscles a workout. Taking it out of the picture makes things easier in the short term, but in the long term, it can lead to intellectual deskilling, where our cognitive muscles gradually become weaker for lack of use.
“Practical wisdom is built up by practice just like all the other virtues, so if you don’t have the opportunity to reason and don’t have practice in deliberating about certain things, you won’t be able to deliberate well later,” philosopher of technology Shannon Vallor told me last year. “We need a lot of cognitive exercise in order to develop practical wisdom and retain it. And there is reason to worry about cognitive automation depriving us of the opportunity to build and retain those cognitive muscles.”
So, how do you help your students retain and build their phronesis? You add friction back in, by giving them as many opportunities as possible to practice deliberating and choosing.
If I were designing the curriculum, I wouldn’t do that by adopting a strict “no AI” policy. Instead, I’d be honest with students about the real benefit of the humanities and about why mindless AI cheating would be cheating themselves out of that benefit. Then, I’d offer them two choices when it comes time to write an essay: They can either write it with help from AI, or without. Both are totally fine.
But if they do get help from AI, they have to also write an in-class reflection piece, explaining why they chose to use a chatbot and how they think it changed their thinking and learning process. I’d make it shorter than the original assignment but longer than a paragraph, so it forces them to develop the very reasoning skills they were trying to avoid using.
As a TA, you could suggest this to professors, but they may not go for it. Unfortunately, you’ve got limited agency here (unless you’re willing to risk your job or walk away from it). All you can do in such a situation is exercise the agency you do have. So use every bit of it.
Since you lead discussion sections, you’re well-placed to prompt your students to work their cognitive muscles in conversation. You could even stage a debate about AI: Assign half of them to argue the case for using chatbots to write papers and half of them to argue the opposite.
If a professor insists on a strict “no AI” policy, and you encounter essays that seem clearly AI-written, you may have little choice but to report them. But if there’s room for doubt about a given essay, you might err on the side of leniency if the student has engaged very thoughtfully in the discussion. At least then you know they’ve achieved the most important aim.
None of this is easy. I feel for you and all other educators who are struggling in this confusing environment. In fact, I wouldn’t be surprised if some educators are suffering from moral injury, a psychological condition that arises when you feel you’ve been forced to violate your own values.
But maybe it can comfort you to remember that this is much bigger than you. Generative AI is an existential threat to a humanities education as currently constituted. Over the next few years, humanities departments will have to paradigm-shift or perish. If they want to survive, they’ll need to get brutally honest about their true mission. For now, from your pre-paradigm-shift perch, all you can do is make the choices that are left for you to make.
- This week I went back to Shannon Vallor’s first book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. If there’s one book I could get everyone in the AI world to read, it would be this one. And I think it can be useful to everyone else, too, because we all need to cultivate what Vallor calls the “technomoral virtues” — the traits that will allow us to adapt well to emerging technologies.
- A New Yorker piece in April about AI and cognitive atrophy led me to a 2024 psychology paper titled “The Unpleasantness of Thinking: A Meta-Analytic Review of the Association Between Mental Effort and Negative Affect.” The authors’ conclusion: “We suggest that mental effort is inherently aversive.” Come again? Yes, sometimes I just want to turn off my brain and watch Netflix, but sometimes thinking about a challenging topic is so pleasurable! To me, it feels like running or weightlifting: Too much is exhausting, but the right amount is exhilarating. And what feels like “the right amount” can go up or down depending on how much I practice.
- Astrobiologist Sara Imari Walker recently published an essay in Noema provocatively titled “AI Is Life.” She reminds us that evolution produced us and we produced AI. “It is therefore part of the same ancient lineage of information that emerged with the origin of life,” she writes. “Technology is not artificially replacing life — it is life.” To be clear, she’s not arguing that tech is alive; she’s saying it’s an outgrowth of human life, an extension of our own species.