(Hua Hsu’s article in The New Yorker, June 30, 2025.)
What Happens After A.I. Destroys College Writing?
The demise of the English paper will end a long intellectual tradition, but it’s also an opportunity to reëxamine the purpose of higher education.
On a blustery spring Thursday, just after midterms,
I went out for noodles with Alex and Eugene, two undergraduates at New York
University, to talk about how they use artificial intelligence in their
schoolwork. When I first met Alex, last year, he was interested in a career in
the arts, and he devoted a lot of his free time to photo shoots with his
friends.
But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.
Alex has wavy hair and speaks with the chill,
singsong cadence of someone who has spent a lot of time in the Bay Area. He and
Eugene scanned the menu, and Alex said that they should get clear broth, rather
than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d
messaged Alex, he had said that everyone he knew used ChatGPT in some fashion,
but that he used it only for organizing his notes.
In person, he admitted that this wasn’t remotely
accurate. “Any type of writing in life, I use A.I.,” he said. He relied on
Claude for research, DeepSeek for reasoning and explanation, and Gemini for
image generation. ChatGPT served more general needs. “I need A.I. to text
girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he
had used A.I. when setting up our meeting. He laughed, and then replied,
“Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”
OpenAI released ChatGPT on November 30, 2022. Six
days later, Sam Altman, the C.E.O., announced that it had reached a million
users. Large language models like ChatGPT don’t “think” in the human sense. When you ask ChatGPT a question, it draws on the data it was trained on and assembles an answer by predicting, one word at a time, what is statistically likely to come next.
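For the technically curious, here is a minimal sketch of that word-by-word generation loop, using a toy bigram model in Python. Real systems use vast neural networks rather than simple word counts, and the corpus and function names here are purely illustrative, not anything OpenAI uses; but the loop’s shape, sample a likely next word, append it, repeat, is the basic idea.

```python
import random
from collections import defaultdict

# A toy illustration of next-word prediction. Real large language
# models learn patterns with neural networks trained on vast corpora;
# this bigram model only sketches the idea that each word is chosen
# from patterns observed in the training data.

corpus = (
    "the student wrote the essay and the professor read the essay "
    "and the professor graded the essay"
).split()

# Count which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it
        # followed the current word in the training text.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```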
Companies had experimented with A.I.-driven
chatbots for years, but most sputtered upon release; Microsoft’s 2016
experiment with a bot named Tay was shut down after sixteen hours because it
began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed
different. It could hold a conversation and break complex ideas down into
easy-to-follow steps. Within a month, Google’s management, fearful that A.I.
would have an impact on its search-engine business, declared a “code red.”
Among educators, an even greater panic arose. It
was too deep into the school term to implement a coherent policy for what
seemed like a homework killer: in seconds, ChatGPT could collect and summarize
research and draft a full essay. Many large campuses tried to regulate ChatGPT
and its eventual competitors, mostly in vain.
I asked Alex to show me an example of an
A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app
to help with computations for his business classes, but he had never gotten the
hang of using it for writing. “I got you,” Alex told him. (All the students I
spoke with are identified by pseudonyms.)
He opened Claude on his laptop. I noticed a chat
that mentioned abolition. “We had to read Robert Wedderburn for a class,” he
explained, referring to the nineteenth-century Jamaican abolitionist. “But,
obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary,
but it was too long for him to read in the ten minutes he had before class
started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then
transcribed Claude’s points in his notebook, since his professor ran a
screen-free classroom.
Alex searched until he found a paper for an
art-history class, about a museum exhibition. He had gone to the show, taken
photographs of the images and the accompanying wall text, and then uploaded
them to Claude, asking it to generate a paper according to the professor’s
instructions. “I’m trying to do the least work possible, because this is a
class I’m not hella fucking with,” he said.
After skimming the essay, he felt that the A.I.
hadn’t sufficiently addressed the professor’s questions, so he refined the
prompt and told it to try again. In the end, Alex’s submission received the
equivalent of an A-minus. He said that he had a basic grasp of the paper’s
argument, but that if the professor had asked him for specifics he’d have been
“so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of
how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t
have made much of its generic tone, or of the precise, box-ticking quality of
its critical observations.
Eugene, serious and somewhat solemn, had been
listening with bemusement. “I would not cut and paste like he did, because I’m
a lot more paranoid,” he said. He’s a couple of years younger than Alex and was
in high school when ChatGPT was released. At the time, he experimented with
A.I. for essays but noticed that it made obvious errors. “This passed
the A.I. detector?” he asked Alex.
When ChatGPT launched, instructors adopted various measures to insure that students’ work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions. But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine.
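The detection companies keep their exact methods proprietary, but a commonly described approach scores text on two signals: perplexity, or how predictable each word is to a language model, and burstiness, or how much that predictability varies from sentence to sentence. The toy scorer below, which substitutes simple word frequencies for a real language model, is only a sketch of that idea, not any detector’s actual algorithm.

```python
import math
import re
import statistics

# A toy stand-in for an A.I.-text detector. Human prose tends to be
# less uniform than machine prose, so detectors are often described
# as measuring "perplexity" (how predictable the words are) and
# "burstiness" (how much predictability varies between sentences).
# Here a trivial word-frequency model replaces a real language model.

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def score(text):
    words = tokens(text)
    freq = {}
    for w in words:
        freq[w] = freq.get(w, 0) + 1
    total = len(words)
    sentences = [s for s in re.split(r"[.!?]", text) if tokens(s)]
    # Surprisal of a word is -log p(word); rarer words surprise more.
    per_sentence = [
        statistics.mean(-math.log(freq[w] / total) for w in tokens(s))
        for s in sentences
    ]
    # A real detector would calibrate signals like these into a
    # probability; this sketch just reports the raw numbers.
    return {
        "mean_surprisal": statistics.mean(per_sentence),
        "burstiness": statistics.pstdev(per_sentence),
    }

print(score("The museum show was fine. The images were arranged chronologically."))
```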
Alex said that his art-history professor was “hella
old,” and therefore probably didn’t know about such programs. We fed the paper
into a few different A.I.-detection websites. One said there was a
twenty-eight-per-cent chance that the paper was A.I.-generated; another put the
odds at sixty-one per cent. “That’s better than I expected,” Eugene said.
I asked if he thought what his friend had done was
cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”
As we looked at Alex’s laptop, I noticed that he
had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He
had concluded that ChatGPT made for the best confidant. He consulted it as one
might a therapist, asking for tips on dating and on how to stay motivated
during dark times. His ChatGPT sidebar was an index of the highs and lows of
being a young person.
He admitted to me and Eugene that he’d used ChatGPT
to draft his application to N.Y.U.—our lunch might never have happened had it
not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he
said.
“It’s cheating, but I don’t think it’s, like,
cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime.
Alex was just fulfilling requirements, not training to become a literary scholar.
Alex had to rush off to his study session. I told
Eugene that our conversation had made me wonder about my function as a
professor. He asked if I taught English, and I nodded. “Mm, O.K.,” he said, and
laughed. “So you’re, like, majorly affected.”
I teach at a small liberal-arts college, and I
often joke that a student is more likely to hand in a big paper a year late (as
recently happened) than to take a dishonorable shortcut. My classes are small
and intimate, driven by processes and pedagogical modes, like letting awkward
silences linger, that are difficult to scale.
As a result, I have always had a vague sense that
my students are learning something, even when it is hard to quantify. In the
past, if I was worried that a paper had been plagiarized, I would enter a few
phrases from it into a search engine and call it due diligence.
But I recently began noticing that some students’
writing seemed out of synch with how they expressed themselves in the
classroom. One essay felt stitched together from two minds—half of it was
polished and rote, the other intimate and unfiltered. Having never articulated
a policy for A.I., I took the easy way out. The student had had enough shame to
write half of the essay, and I focussed my feedback on improving that part.
It’s easy to get hung up on stories of academic
dishonesty. Late last year, in a survey of college and university leaders,
fifty-nine per cent reported an increase in cheating, a figure that feels
conservative when you talk to students. A.I. has returned us to the question of
what the point of higher education is. Until we’re eighteen, we go to school
because we have to, studying the Second World War and reducing fractions while
undergoing a process of socialization. We’re essentially learning how to follow
rules.
College, however, is a choice, and it has always
involved the tacit agreement that students will fulfill a set of tasks,
sometimes pertaining to subjects they find pointless or impractical, and then
receive some kind of credential. But even for the most mercenary of students,
the pursuit of a grade or a diploma has come with an ancillary benefit. You’re
being taught how to do something difficult, and maybe, along the way, you come
to appreciate the process of learning. But the arrival of A.I. means that you
can now bypass the process, and the difficulty, altogether.
There are no reliable figures for how many American
students use A.I., just stories about how everyone is doing it. A 2024 Pew
Research Center survey of students between the ages of thirteen and seventeen
suggests that a quarter of teens currently use ChatGPT for schoolwork, double
the figure from 2023. OpenAI recently released a report claiming that one in
three college students uses its products.
There’s good reason to believe that these are low
estimates. If you grew up Googling everything or using Grammarly to give your
prose a professional gloss, it isn’t far-fetched to regard A.I. as just another
productivity tool. “I see it as no different from Google,” Eugene said. “I use
it for the same kind of purpose.”
Being a student is about testing boundaries and
staying one step ahead of the rules. While administrators and educators have
been debating new definitions for cheating and discussing the mechanics of
surveillance, students have been embracing the possibilities of A.I.
A few months after the release of ChatGPT, a
Harvard undergraduate got approval to conduct an experiment in which ChatGPT wrote
papers that had been assigned in seven courses. The A.I. skated by with a 3.57
G.P.A., a little below the school’s average. Upstart companies introduced
products that specialized in “humanizing” A.I.-generated writing, and TikTok
influencers began coaching their audiences on how to avoid detection.
Unable to keep pace, academic administrations
largely stopped trying to control students’ use of artificial intelligence and
adopted an attitude of hopeful resignation, encouraging teachers to explore the
practical, pedagogical applications of A.I. In certain fields, this wasn’t a
huge stretch.
Studies show that A.I. is particularly effective in
helping non-native speakers acclimate to college-level writing in English. In
some STEM classes, using generative A.I. as a tool is acceptable. Alex and
Eugene told me that their accounting professor encouraged them to take
advantage of free offers on new A.I. products available only to undergraduates,
as companies competed for student loyalty throughout the spring.
In May, OpenAI announced ChatGPT Edu, a product
specifically marketed for educational use, after schools including Oxford
University, Arizona State University, and the University of Pennsylvania’s
Wharton School of Business experimented with incorporating A.I. into their
curricula. This month, the company detailed plans to integrate ChatGPT into
every dimension of campus life, with students receiving “personalized” A.I.
accounts to accompany them throughout their years in college.
But for English departments, and for college
writing in general, the arrival of A.I. has been more vexed. Why bother
teaching writing now? The future of the midterm essay may be a quaint worry
compared with larger questions about the ramifications of artificial
intelligence, such as its effect on the environment, or the automation of jobs.
And yet has there ever been a time in human history
when writing was so important to the average person? E-mails, texts,
social-media posts, angry missives in comments sections, customer-service
chats—let alone one’s actual work. The way we write shapes our thinking. We
process the world through the composition of text dozens of times a day, in
what the literary scholar Deborah Brandt calls our era of “mass writing.” It’s
possible that the ability to write original and interesting sentences will
become only more important in a future where everyone has access to the same
A.I. assistants.
Corey Robin, a writer and a professor of political
science at Brooklyn College, read the early stories about ChatGPT with
skepticism. Then his daughter, a sophomore in high school at the time, used it
to produce an essay that was about as good as those his undergraduates wrote
after a semester of work. He decided to stop assigning take-home essays. For
the first time in his thirty years of teaching, he administered in-class exams.
Robin told me he finds many of the steps that
universities have taken to combat A.I. essays to be “hand-holding that’s not
leading people anywhere.” He has become a believer in the
passage-identification blue-book exam, in which students name and contextualize
excerpts of what they’ve read for class. “Know the text and write about it
intelligently,” he said. “That was a way of honoring their autonomy without
being a cop.”
His daughter, who is now a senior, complains that
her teachers rarely assign full books. And Robin has noticed that college
students are more comfortable with excerpts than with entire articles, and
prefer short stories to novels. “I don’t get the sense they have the kind of
literary or cultural mastery that used to be the assumption upon which we
assigned papers,” he said.
One study, published last year, found that
fifty-eight per cent of students at two Midwestern universities had so much
trouble interpreting the opening paragraphs of “Bleak House,” by Charles
Dickens, that “they would not be able to read the novel on their own.” And
these were English majors.
The return to pen and paper has been a common
response to A.I. among professors, with sales of blue books rising
significantly at certain universities in the past two years. Siva
Vaidhyanathan, a professor of media studies at the University of Virginia, grew
dispirited after some students submitted what he suspected was A.I.-generated
work for an assignment on how the school’s honor code should view
A.I.-generated work. He, too, has decided to return to blue books, and is
pondering the logistics of oral exams. “Maybe we go all the way back to 450
B.C.,” he told me.
But other professors have renewed their emphasis on
getting students to see the value of process. Dan Melzer, the director of the
first-year composition program at the University of California, Davis, recalled
that “everyone was in a panic” when ChatGPT first hit. Melzer’s job is to think
about how writing functions across the curriculum so that all students, from
prospective scientists to future lawyers, get a chance to hone their prose.
Consequently, he has an accommodating view of how
norms around communication have changed, especially in the internet age. He was
sympathetic to kids who viewed some of their assignments as dull and mechanical
and turned to ChatGPT to expedite the process. He called the five-paragraph
essay—the classic “hamburger” structure, consisting of an introduction, three
supporting body paragraphs, and a conclusion—“outdated,” having descended from
élitist traditions.
Melzer believes that some students loathe writing
because of how it’s been taught, particularly in the past twenty-five years.
The No Child Left Behind Act, from 2002, instituted standards-based reforms
across all public schools, resulting in generations of students being taught to
write according to rigid testing rubrics. As one teacher wrote in the
Washington Post in 2013, students excelled when they mastered a form of “bad
writing.” Melzer has designed workshops that treat writing as a deliberative, iterative
process involving drafting, feedback (from peers and also from ChatGPT), and
revision.
“If you assign a generic essay topic and don’t
engage in any process, and you just collect it a month later, it’s almost like
you’re creating an environment tailored to crime,” he said. “You’re encouraging
crime in your community!”
I found Melzer’s pedagogical approach inspiring; I
instantly felt bad for routinely breaking my class into small groups so that
they could “workshop” their essays, as though the meaning of this verb were
intuitively clear. But, as a student, I’d have found Melzer’s focus on process
tedious—it requires a measure of faith that all the work will pay off in the
end. Writing is hard, regardless of whether it’s a five-paragraph essay or a
haiku, and it’s natural, especially when you’re a college student, to want to
avoid hard work—this is why classes like Melzer’s are compulsory. “You can
imagine that students really want to be there,” he joked.
College is all about opportunity costs. One way of
viewing A.I. is as an intervention in how people choose to spend their time. In
the early nineteen-sixties, college students spent an estimated twenty-four
hours a week on schoolwork. Today, that figure is about fifteen, a sign, to
critics of contemporary higher education, that young people are beneficiaries
of grade inflation—in a survey conducted by the Harvard Crimson, nearly eighty
per cent of the class of 2024 reported a G.P.A. of 3.7 or higher—and lack the
diligence of their forebears.
I don’t know how many hours I spent on schoolwork
in the late nineties, when I was in college, but I recall feeling that there
was never enough time. I suspect that, even if today’s students spend less time
studying, they don’t feel significantly less stressed. It’s the nature of
campus life that everyone assimilates into a culture of busyness, and a lot of
that anxiety has been shifted to extracurricular or pre-professional pursuits.
A dean at Harvard remarked that students feel compelled to find distinction
outside the classroom because they are largely indistinguishable within it.
Eddie, a sociology major at Long Beach State, is
older than most of his classmates. He graduated from high school in 2010 and worked
full time while attending a community college. “I’ve gone through a lot to be
at school,” he told me. “I want to learn as much as I can.” ChatGPT, which his
therapist recommended to him, was ubiquitous at Long Beach even before the
California State University system, of which Long Beach is a part, announced a
partnership with OpenAI, giving its four hundred and sixty thousand students
access to ChatGPT Edu. “I was a little suspicious of how convenient it was,”
Eddie said. “It seemed to know a lot, in a way that seemed so human.”
He told me that he used A.I. “as a brainstorm” but
never for writing itself. “I limit myself, for sure.” Eddie works for Los
Angeles County, and he was talking to me during a break. He admitted that, when
he was pressed for time, he would sometimes use ChatGPT for quizzes. “I don’t
know if I’m telling myself a lie,” he said. “I’ve given myself opportunities to
do things ethically, but if I’m rushing to work I don’t feel bad about that,”
particularly for courses outside his major.
I recognized Eddie’s conflict. I’ve used ChatGPT a
handful of times, and on one occasion it accomplished a scheduling task so
quickly that I began to understand the intoxication of hyper-efficiency. I’ve
felt the need to stop myself from indulging in idle queries. Almost all the
students I interviewed in the past few months described the same trajectory:
from using A.I. to assist with organizing their thoughts to off-loading their
thinking altogether.
For some, it became something akin to social media,
constantly open in the corner of the screen, a portal for distraction. This
wasn’t like paying someone to write a paper for you—there was no social
friction, no aura of illicit activity. Nor did it feel like sharing notes, or
like passing off what you’d read in CliffsNotes or SparkNotes as your own
analysis. There was no real time to reflect on questions of originality or
honesty—the student basically became a project manager.
And for students who use it the way Eddie did, as a
kind of sounding board, there’s no clear threshold where the work ceases to be
an original piece of thinking. In April, Anthropic, the company behind Claude,
released a report drawn from a million anonymized student conversations with
its chatbots. It suggested that more than half of user interactions could be
classified as “collaborative,” involving a dialogue between student and A.I.
(Presumably, the rest of the interactions were more extractive.)
May, a sophomore at Georgetown, was initially
resistant to using ChatGPT. “I don’t know if it was an ethics thing,” she said.
“I just thought I could do the assignment better, and it wasn’t worth the time
being saved.” But she began using it to proofread her essays, and then to
generate cover letters, and now she uses it for “pretty much all” her classes.
“I don’t think it’s made me a worse writer,” she said.
“It’s perhaps made me a less patient writer. I used
to spend hours writing essays, nitpicking over my wording, really thinking
about how to phrase things.” College had made her reflect on her experience at
an extremely competitive high school, where she had received top grades but
retained very little knowledge. As a result, she was the rare student who found
college somewhat relaxed. ChatGPT helped her breeze through busywork and deepen
her engagement with the courses she felt passionate about. “I was trying to
think, Where’s all this time going?” she said. I had never envied a college
student until she told me the answer: “I sleep more now.”
Harry Stecopoulos oversees the University of Iowa’s
English department, which has more than eight hundred majors. On the first day
of his introductory course, he asks students to write by hand a
two-hundred-word analysis of the opening paragraph of Ralph Ellison’s
“Invisible Man.” There are always a few grumbles, and students have
occasionally walked out. “I like the exercise as a tone-setter, because it
stresses their writing,” he told me.
The return of blue-book exams might disadvantage
students who were encouraged to master typing at a young age. Once you’ve grown
accustomed to the smooth rhythms of typing, reverting to a pen and paper can
feel stifling. But neuroscientists have found that the “embodied experience” of
writing by hand taps into parts of the brain that typing does not.
Being able to write one way—even if it’s more
efficient—doesn’t make the other way obsolete. There’s something lofty about
Stecopoulos’s opening-day exercise. But there’s another reason for it: the
handwritten paragraph also begins a paper trail, attesting to voice and style,
that a teaching assistant can consult if a suspicious paper is submitted.
Kevin, a third-year student at Syracuse University,
recalled that, on the first day of a class, the professor had asked everyone to
compose some thoughts by hand. “That brought a smile to my face,” Kevin said.
“The other kids are scratching their necks and sweating, and I’m, like, This is
kind of nice.”
Kevin had worked as a teaching assistant for a
mandatory course that first-year students take to acclimate to campus life.
Writing assignments involved basic questions about students’ backgrounds, he
told me, but they often used A.I. anyway. “I was very disturbed,” he said. He
occasionally uses A.I. to help with translations for his advanced Arabic
course, but he’s come to look down on those who rely heavily on it. “They
almost forget that they have the ability to think,” he said. Like many former
holdouts, Kevin felt that his judicious use of A.I. was more defensible than
his peers’ use of it.
As ChatGPT begins to sound more human, will we
reconsider what it means to sound like ourselves? Kevin and some of his friends
pride themselves on having an ear attuned to A.I.-generated text. The
hallmarks, he said, include a preponderance of em dashes and a voice that feels
blandly objective. An acquaintance had run an essay that she had written
herself through a detector, because she worried that she was starting to phrase
things like ChatGPT did. He read her essay: “I realized, like, It does kind of
sound like ChatGPT. It was freaking me out a little bit.”
A particularly disarming aspect of ChatGPT is that,
if you point out a mistake, it communicates in the backpedalling tone of a
contrite student. (“Apologies for the earlier confusion. . . .”) Its mistakes
are often referred to as hallucinations, a description that seems to
anthropomorphize A.I., conjuring a vision of a sleep-deprived assistant.
Some professors told me that they had students
fact-check ChatGPT’s work, as a way of discussing the importance of original
research and of showing the machine’s fallibility. Hallucination rates have
grown worse for most A.I.s, with no single reason for the increase. As a
researcher told the Times, “We still don’t know how these models work exactly.”
But many students claim to be unbothered by A.I.’s
mistakes. They appear nonchalant about the question of achievement, and even
dissociated from their work, since it is only notionally theirs. Joseph, a
Division I athlete at a Big Ten school, told me that he saw no issue with using
ChatGPT for his classes, but he did make one exception: he wanted to experience
his African-literature course “authentically,” because it involved his
heritage.
Alex, the N.Y.U. student, said that if one of his
A.I. papers received a subpar grade his disappointment would be focussed on the
fact that he’d spent twenty dollars on his subscription. August, a sophomore at
Columbia studying computer science, told me about a class where she was
required to compose a short lecture on a topic of her choosing.
“It was a class where everyone was guaranteed an A,
so I just put it in and I maybe edited like two words and submitted it,” she
said. Her professor identified her essay as exemplary work, and she was asked
to read from it to a class of two hundred students. “I was a little nervous,”
she said. But then she realized, “If they don’t like it, it wasn’t me who wrote
it, you know?”
Kevin, by contrast, desired a more general kind of
moral distinction. I asked if he would be bothered to receive a lower grade on
an essay than a classmate who’d used ChatGPT. “Part of me is able to
compartmentalize and not be pissed about it,” he said. “I developed myself as a
human. I can have a superiority complex about it. I learned more.” He smiled.
But then he continued, “Part of me can also be, like, This is so unfair. I
would have loved to hang out with my friends more. What did I gain? I made my
life harder for all that time.”
In my conversations, just as college students
invariably thought of ChatGPT as merely another tool, people older than forty
focussed on its effects, drawing a comparison to G.P.S. and the erosion of our
relationship to space.
The London cabdrivers rigorously trained in “the
knowledge” famously developed abnormally large posterior hippocampi, the part
of the brain crucial for long-term memory and spatial awareness. And yet, in
the end, most people would probably rather have swifter travel than sharper
memories. What is worth preserving, and what do we feel comfortable off-loading
in the name of efficiency?
What if we take seriously the idea that A.I.
assistance can accelerate learning—that students today are arriving at their
destinations faster? In 2023, researchers at Harvard introduced a self-paced
A.I. tutor in a popular physics course. Students who used the A.I. tutor
reported higher levels of engagement and motivation and did better on a test
than those who were learning from a professor.
May, the Georgetown student, told me that she often
has ChatGPT produce extra practice questions when she’s studying for a test.
Could A.I. be here not to destroy education but to revolutionize it? Barry Lam
teaches in the philosophy department at the University of California,
Riverside, and hosts a popular podcast, Hi-Phi Nation, which applies
philosophical modes of inquiry to everyday topics.
He began wondering what it would mean for A.I. to
actually be a productivity tool. He spoke to me from the podcast studio he
built in his shed. “Now students are able to generate in thirty seconds what
used to take me a week,” he said. He compared education to carpentry, one of
his many hobbies. Could you skip to using power tools without learning how to
saw by hand?
If students were learning things faster, then it
stood to reason that Lam could assign them “something very hard.” He wanted to
test this theory, so for final exams he gave his undergraduates a Ph.D.-level
question involving denotative language and the German logician Gottlob Frege, which was, frankly, beyond me. “They fucking failed it miserably,” he said. He
adjusted his grading curve accordingly.
Lam doesn’t find the use of A.I. morally
indefensible. “It’s not plagiarism in the cut-and-paste sense,” he argued,
because there’s technically no original version. Rather, he finds it a
potential waste of everyone’s time. At the start of the semester, he has told
students, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then
I will grade all your work by ChatGPT and we can all go to the beach.”
Nobody gets into teaching because he loves grading
papers. I talked to one professor who rhapsodized about how much more his
students were learning now that he’d replaced essays with short exams. I asked
if he missed marking up essays. He laughed and said, “No comment.” An
undergraduate at Northeastern University recently accused a professor of using
A.I. to create course materials; she filed a formal complaint with the school,
requesting a refund for some of her tuition.
The dustup laid bare the tension between why many
people go to college and why professors teach. Students are raised to
understand achievement as something discrete and measurable, but when they
arrive at college there are people like me, imploring them to wrestle with
difficulty and abstraction. Worse yet, they are told that grades don’t matter
as much as they did when they were trying to get into college—only, by this
point, students are wired to find the most efficient path possible to good
marks.
As the craft of writing is degraded by A.I.,
original writing has become a valuable resource for training language models.
Earlier this year, a company called Catalyst Research Alliance advertised
“academic speech data and student papers” from two research studies run in the
late nineties and mid-two-thousands at the University of Michigan.
The school asked the company to halt its work—the
data was available for free to academics anyway—and a university spokesperson
said that student data “was not and has never been for sale.” But the situation
did lead many people to wonder whether institutions would begin viewing
original student work as a potential revenue stream.
According to a recent study from the Organisation
for Economic Co-operation and Development, human intellect has declined since
2012. An assessment of tens of thousands of adults in nearly thirty countries
showed an over-all decade-long drop in test scores for math and for reading
comprehension.
Andreas Schleicher, the director for education and
skills at the O.E.C.D., hypothesized that the way we consume information
today—often through short social-media posts—has something to do with the
decline in literacy. (One of Europe’s top performers in the assessment was
Estonia, which recently announced that it will bring A.I. to some high-school
students in the next few years, sidelining written essays and rote homework
exercises in favor of self-directed learning and oral exams.)
Lam, the philosophy professor, used to be a
colleague of mine, and for a brief time we were also neighbors. I’d
occasionally look out the window and see him building a fence, or gardening.
He’s an avid amateur cook, guitarist, and carpenter, and he remains convinced
that there is value to learning how to do things the annoying, old-fashioned,
and—as he puts it—“artisanal” way.
He told me that his wife, Shanna Andrawis, who has
been a high-school teacher since 2008, frequently disagreed with his cavalier
methods for dealing with large language models. Andrawis argues that dishonesty
has always been an issue. “We are trying to mass educate,” she said, meaning
there’s less room to be precious about the pedagogical process. “I don’t have
conversations with students about ‘artisanal’ writing. But I have conversations
with them about our relationship. Respect me enough to give me your authentic
voice, even if you don’t think it’s that great. It’s O.K. I want to meet you
where you’re at.”
Ultimately, Andrawis was less fearful of ChatGPT
than of the broader conditions of being young these days. Her students have
grown increasingly introverted, staring at their phones with little desire to
“practice getting over that awkwardness” that defines teen life, as she put it.
A.I. might contribute to this deterioration, but it isn’t solely to blame. It’s
“a little cherry on top of an already really bad ice-cream sundae,” she said.
When the school year began, my feelings about
ChatGPT were somewhere between disappointment and disdain, focussed mainly on
students. But, as the weeks went by, my sense of what should be done and who
was at fault grew hazier. Eliminating core requirements, rethinking G.P.A.,
teaching A.I. skepticism—none of the potential fixes could turn back the
preconditions of American youth.
Professors can reconceive of the classroom, but
there is only so much we control. I lacked faith that educational institutions
would ever regard new technologies as anything but inevitable. Colleges and
universities, many of which had tried to curb A.I. use just a few semesters
ago, rushed to partner with companies like OpenAI and Anthropic, deeming a
product that didn’t exist four years ago essential to the future of school.
Except for a year spent bumming around my home
town, I’ve basically been on a campus for the past thirty years. Students these
days approach college as consumers, in ways that never would have occurred to me
when I was their age. They’ve grown up at a time when society values high-speed
takes, not the slow deliberation of critical thinking.
Although I’ve empathized with my students’ various
mini-dramas, I rarely project myself into their lives. I notice them noticing
one another, and I let the mysteries of their lives go. Their pressures are so
different from the ones I felt as a student. Although I envy their metabolisms,
I would not wish for their sense of horizons.
Education, particularly in the humanities, rests on
a belief that, alongside the practical things students might retain, some
arcane idea mentioned in passing might take root in their mind, blossoming
years in the future. A.I. allows any of us to feel like an expert, but it is
risk, doubt, and failure that make us human. I often tell my students that this
is the last time in their lives that someone will have to read something they
write, so they might as well tell me what they actually think.
Despite all the current hysteria around students
cheating, they aren’t the ones to blame. They did not lobby for the
introduction of laptops when they were in elementary school, and it’s not their
fault that they had to go to school on Zoom during the pandemic. They didn’t
create the A.I. tools, nor were they at the forefront of hyping technological
innovation. They were just early adopters, trying to outwit the system at a
time when doing so has never been so easy. And they have no more control than
the rest of us.
Perhaps they sense this powerlessness even more
acutely than I do. One moment, they are being told to learn to code; the next,
it turns out employers are looking for the kind of “soft skills” one might
learn as an English or a philosophy major. In February, a labor report from the
Federal Reserve Bank of New York reported that computer-science majors had a
higher unemployment rate than ethnic-studies majors did—the result, some
believed, of A.I. automating entry-level coding jobs.
None of the students I spoke with seemed lazy or passive. Alex and Eugene, the N.Y.U. students, worked hard—but part of their effort went to editing out anything in their college experiences that felt extraneous. They were radically resourceful.
When classes were over and students were moving into their summer housing, I e-mailed with Alex, who was settling in in the East Village. He’d just finished his finals, and estimated that he’d spent between thirty minutes and an hour composing two papers for his humanities classes. Without the assistance of Claude, it might have taken him around eight or nine hours. “I didn’t retain anything,” he wrote. “I couldn’t tell you the thesis for either paper hahhahaha.” He received an A-minus and a B-plus.