
This piece originally appeared in Toronto Life.

Does ChatGPT Signal the End of University as We Know It?

At the high school Abhinash attended in India, calculators were forbidden. For tests, including the statewide university entrance exams, students wrote out their equations in longhand. They were being evaluated not only for their understanding of math but also for their ability to trudge through the steps of each equation, work that no scientist or engineer in the 21st century would ever need to perform.

In 2019, Abhinash moved to Toronto to pursue a degree in earth and environmental sciences. (Abhinash is a pseudonym. Like other students interviewed for this story, he asked me to withhold his real name because he has done things his professors may consider cheating.) In Toronto, he enrolled in a linear-algebra class, where, to his surprise, calculators were not merely permitted but required. The first time he brought one to an exam, it felt wrong, like showing up to a black-tie gala in jeans and a T-shirt. He placed the device on his desk and willed himself to touch it, instinctively feeling that doing so might violate a sacred rule. He quickly got over this fear. Soon, the very notion of a prohibition on calculators seemed ridiculous.

Abhinash was in the fourth year of his degree when a far more powerful tool hit the market. On November 30, 2022, OpenAI, a Microsoft-funded research lab in San Francisco, made its chatbot, ChatGPT, publicly available for free. In December, Abhinash was hanging out in the common room of his building with friends when one of them introduced the group to the program.

The guys were enthralled. They crowded around their buddy’s laptop and began issuing commands to the bot, instructing it to write poems and song lyrics. Later that evening, two of the friends got into an argument over a group assignment, and one stormed out of the room. When he returned, he learned that his buddies had prompted ChatGPT to write an apology on his behalf—and to generate alternative versions in the style of a rapper, a pirate and a Shakespearean actor. Abhinash was fascinated by the program, although he couldn’t fully grasp its purpose. It seemed more interesting than useful.

He soon learned that he was wrong. In February of 2023, he went with his class on a field trip to High Park. Afterward, the professor gave students a soil sample from the mucky bottom of Grenadier Pond and instructed them to write a paper on the sediment, linking it to events in Toronto history. Abhinash was stumped: the sedimentary record didn’t seem to line up with the historical one. Roughly 25 centimetres from the top of the sample, he saw what seemed to be a layer of black tar, which he dated to the early 1970s. But what on earth could have caused it?

He scoured the university databases in the hopes of uncovering a regional event—a fire, say, or a major construction project—that might explain the sedimentary change, but nothing came up. In desperation, he wrote up a description of the soil sample and prompted ChatGPT to interpret it. The bot responded in seconds, linking the tar in the sample to the construction of the Queensway thoroughfare in the 1950s. At first, the solution seemed absurd to Abhinash—the timing made no sense—but he soon realized it was correct. In his original analysis, he’d misdated the soil sample, attributing sediment from the postwar construction boom to events 20 years later. ChatGPT had corrected the mistake.

It had saved him time, too. Soon, Abhinash was using it to produce abstracts for his scientific papers, to craft transition sentences and to break him out of writer’s block. When he hit a wall intellectually, he’d paste his half-done work into the bot and instruct it to finish the job. He never tried to pass off AI-generated text as his own. ChatGPT simply came up with ideas; if he liked them, he rewrote them. He was still thinking for himself, but he was enlisting the bot as secretary, sounding board and copy editor.

Was he cheating? Professors everywhere were saying that students using ChatGPT in their schoolwork were guilty of grievous academic misconduct. The logic of their arguments was simple enough. Ever since Yale University popularized the academic grading system in the early 19th century, grades have been the currency around which universities operate. Like any currency, grades have exchange value: they buy scholarships, reference letters from instructors, and placements in competitive graduate schools or professional programs.

By this logic, students who don’t do all the work they submit are basically scam artists, amassing unearned capital and using it to secure benefits they don’t deserve. Universities exist not only to prepare students for the professional world but also to protect the professions themselves by ensuring, or at least trying to ensure, that the most critical jobs go to the most qualified candidates. If they stop being meritocratic—if they stop selecting for the most talented or hard-working students and instead elevate those who are most willing to game the system—society’s bedrock fields, like law, medicine and engineering, could become rife with fraudsters.

When ChatGPT first appeared, instructors and administrators saw the potential for academic grift on a massive scale, an existential threat to the norms of their institutions—and perhaps to us all. But Abhinash wasn’t convinced. His teachers had once said similar things about calculators, and anyway, people always freak out when a new technology hits the market. I teach courses on long-form journalism at the University of Toronto, and over the past year, I’ve witnessed the ChatGPT dilemma up close. Every instructor knows that the technology is a big deal. But should universities fight it with everything they’ve got, or can they somehow live with it? Is it a game ender—or just a game changer?


Simon Lewsen