
American education, for whatever reason, is prone to falling fast for fads that we eventually regret. Wall-less “open” classrooms were once all the rage until we realized they were loud and chaotic. “Personalized,” fully online schooling was predicted to revolutionize learning until we realized the academic and interpersonal costs of putting a child in front of a screen all day. Different “learning styles” turned out to be a myth. The vogue of whole-language instruction harmed millions of young readers. The recurring theme is that the education community gets seduced by an innovation’s big promises, and students end up paying the price.
It’s happening again with AI. We’re told of this technology’s ground-breaking nature: It will save students’ time. “Assist” their learning. Prepare them for the jobs of the future. What we’re not told—but what’s being revealed in practice—is that AI is doing students’ basic work for them. To be clear, AI is not just defining an uncommon word or putting a meeting on the calendar. AI is doing a staggering array of tasks that are central to learning. Students are using AI to summarize books and articles the students then won’t read. AI is brainstorming ideas and outlining, drafting, and editing papers. It is producing slide decks and notes for class presentations; it is writing emails to professors and prospective employers. Anyone who cares about education should be heartbroken to read the first-person accounts of students watching their peers simply use chatbots to complete meaningful assignments.
Unless we do something now, the rush to allow AI into the classroom will harm students and schools for decades. We need a moratorium on students’ use of artificial intelligence.
State leaders should require their educational institutions to suspend today’s ill-considered rush into AI use for two years. We need to take a breath and decide what role we want AI to play in our students’ development. Over the last 20 years, the advocacy of tech enthusiasts and our own negligence have led to the spread of all sorts of screens and social media in classrooms and pockets. That was a terrible mistake, but AI poses a greater threat to teaching and learning than iPhones ever did.
It is difficult to overstate the extent of its use or the rapidity of adoption. Two years ago, half of young people (including high school students) were already using generative AI. A year and a half ago, 4 in 10 college students were regularly using AI for their school work—three-quarters had AI generate ideas for their papers, half had AI write sections for them, and almost 1 in 3 had AI write entire essays. At one highly selective college, at least 60 percent of students in the business, public health, and kinesiology schools were using AI to complete assignments; at another, 1 in 4 students used AI instead of completing reading assignments.
It’s worse now. According to a survey by Inside Higher Ed late this summer, 85 percent of college students said they’d used AI for their coursework in the last year—the top uses included brainstorming ideas, summarizing readings, and outlining and editing papers. Another survey this fall put the figure at 90 percent, with three-quarters of students saying their use had increased over the past year. Based on what they’ve seen, a majority of teachers now doubt that students’ work is their own; nearly three-quarters of professors are concerned about AI’s role in student cheating.
Have no doubt, schooling is being fundamentally altered without a public discussion. In an article in The Atlantic this summer, “College Students Have Already Changed Forever,” professor Ian Bogost argued that AI use is now ubiquitous. “Higher education has been changed forever in the span of a single undergraduate career.” Nevertheless, the education community lacks a sense of urgency. “My recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem.”
The first step is dropping the lazy, defeatist view that AI in schools is here to stay so we should just get on board. No. It is always up to us to decide whether and how to adopt new technologies—be it eugenics, cloning, performance-enhancing drugs, smartphones, or artificial intelligence. We are not powerless in this. We need to have the backbone to say, “Our schools and our students are too important to allow hope, marketing, and inertia to set our course.”
American taxpayers spend billions on primary, secondary, and postsecondary schools each year. State leaders have a role to play. They need to make sure schools have asked all the tough questions and set a reasoned destination before taking one more step down the AI path.
Of course, AI makes finishing school assignments quicker and easier. But reducing effort and the time to completion are not the measures of success in education. Far from it. Great teachers don’t assign Pride and Prejudice or The Odyssey and then give an A to the students who most swiftly learn whether Elizabeth and Mr. Darcy get together or Odysseus makes it home. They don’t assign a 10-page paper on the disagreements between the Federalists and Anti-Federalists and then start a stopwatch.
Reading high-quality fiction builds concentration and memory, improves vocabulary and language use, and enables the mind to practice connecting ideas—things central to success as an adult. Producing an extended paper requires close and careful reading, the organization of thoughts, the development of an argument, and the marshaling of evidence—again, things central to later professional achievement. Teachers require kids to complete thousands of simple arithmetic problems so they begin to recognize patterns and become experts at operations that serve as the foundation of future skills. In other words, so much of learning is in the work, not the output. In an excellent essay about the costs of students’ AI use in The Argument, Derek Thompson aptly applies the athletic-training concept of “time under tension” to learning: Brains, like muscles, need continuous exertion to develop. We cannot forget that a student’s academic development is largely a function of the amount of time and effort she spends on learning—tasks that we are now blithely allowing her to offshore to AI.
The initial evidence is telling us what we should have known. According to an MIT study, students using AI for writing essays have significantly decreased brain activity, struggle to remember what they’ve written, and become reliant on the tool. AI steals their learning. Unsurprisingly, students who use AI to write papers are likelier to say AI is hurting their critical thinking. Seventy percent of teachers agree, saying AI harms students’ critical thinking and research skills. We should be saddened and angered by the first-person accounts of educators realizing how AI is sabotaging student development. And as Americans become more aware of this degradation of learning, we should expect public support for schooling to fall. Indeed, 70 percent of college administrators and faculty are concerned that AI use will reduce the perceived value of a traditional college degree.
My call for a short-term moratorium is a measured response to our impetuous, irresponsible rush into AI use. Bear in mind that others, alarmed by AI-enabled cheating, are calling not just for bans but for a swift, broad “de-teching” of campuses to preserve the integrity of higher education. Historian Niall Ferguson has recommended that campuses immediately create “cloisters,” AI-free spaces for students to do their own reading, writing, and discussing. Schooling has been around for thousands of years, and in that time, humans have accomplished a great deal without AI. A two-year pause isn’t rash. Leaping before looking is.
This forced breather will enable schools to treat this innovation like the FDA treats a promising new medication: We hope it will do great things, but it could do harm, and we’ve been around long enough to know that its advocates will swear up and down that it’s a miracle drug. We need to fully understand its risks, not just its promise. Even if it’s good on balance, we need to be aware of its side effects. With something so uncertain, we need to take a belt-and-suspenders approach. We’re not saying, “No.” We’re just saying, “Not yet.”
The moratorium should apply to all public schools, from K-12 through colleges and universities, with exemptions allowed for higher-education courses related to the science of AI. (The moratorium is meant to stop students from using AI to do work they need to do themselves, not to stop students from studying how AI is developed, functions, and can be applied in professional settings.) The pause should be paired with a requirement that those governing state-funded schools develop policies related to student use of AI. According to Education Week, only Ohio has taken this step and only with regard to K-12 schools. Other states have begun offering guidance, but those documents vary widely and can focus on issues like procurement and data privacy instead of the central matter: forbidding students to have AI think for them.
These policies should cover the waterfront, e.g., the age at which students can use AI, the types of uses permitted and prohibited, the discretion retained by individual educators, tools for monitoring AI use, and plans to assess how AI use is affecting student learning. In the meantime, schools will need to prohibit most AI use in the classroom and for at-home assignments. The challenge of monitoring and detecting AI use means that, in the short term, schools will probably also need to rely more on in-class assignments and pen-and-paper work. Then, with the time and space to reconsider our current, imprudent course, educators will be able to study and establish AI-use guardrails to protect students and schools.
We must acknowledge the primary reason AI use has been allowed to swell: Good people are afraid of being seen as Luddites. No one wants to come across as opposed to scientific progress, so most keep their heads down and go along. But strong societies have not been afraid to suspend or even roll back technological advancements after realizing their costs. Chemical weapons were once considered a military advancement, but in time nations wisely agreed to ban their use. Atomic weaponry was seen as a scientific marvel, but the international community banned its proliferation. Nations scaled back once-heralded nuclear power plants after major accidents. For a time, some doctors considered cigarettes healthy and even took part in advertisements. Some considered eugenics a brilliant scientific advancement; thankfully it was brought to a halt. Human cloning and recombinant DNA, thought to be inspired science in some circles, have been regulated. Countless drugs and chemicals were celebrated as scientific wonders until we understood the costs and issued tight controls: CFCs, cocaine, DDT, LSD, nicotine, OxyContin, thalidomide. “Gain of function” and “dual-use” research have been thought profound, but, because of their risks, they are now tightly controlled. Performance-enhancing drugs were long used and celebrated as supporting athletic advancement, but we now ban or tightly control them.
If these examples are not enough, simply consider how much better off we would be today had we instituted, say 10 years ago, a two-year moratorium to study and reconsider adolescents’ use of social media and handheld devices. We likely would have forestalled a host of problems related to mental health, friendships, attention spans, social development, reading, and more. I highly doubt we would have decided that it was a good idea for young people to spend five hours per day on social media or eight hours per day on screens.
But we are doing the same thing with AI in schools that we did with social media—just letting it happen. Like so many others, I’m hopeful about what AI might help us do in medicine, physics, archeology, and more. But I’m deeply concerned about what it is doing to our schools. We can remain optimistic about the potential of AI in the professional world, while demonstrating caution when it comes to our students. We shouldn’t ignore the risks and sit on our hands.
Let’s just pause and think before we do something we regret.