Colleges Aim to Prevent Cheating with Paper Exams and Chatbot Restrictions
Philosophy professor Darren Hick encountered yet another case of cheating in his Furman University classroom last semester. In an update to his social media followers, he wrote, "Aaaand, I've caught my second ChatGPT plagiarist."
Friends and colleagues responded, some with wide-eyed emojis. Others expressed surprise.
"Only 2?! I've caught dozens," said Timothy Main, a writing professor at Conestoga College in Canada. "We're in full-on crisis mode."
Practically overnight, ChatGPT and other AI chatbots have become a go-to source of cheating in higher education.
Now, instructors are rethinking how they will teach courses this fall, from Writing 101 to computer science. Educators say they want to embrace the technology's potential to teach and learn in new ways, but when it comes to assessing students, they see a need to "ChatGPT-proof" test questions and assignments.
For some instructors, that means a return to paper exams after years of digital-only tests. Some professors now require students to show editing history and drafts to prove their thought process. Other instructors are less concerned. Some students have always found ways to cheat, they say, and this is just the latest option.
The explosion of AI-powered chatbots, including ChatGPT, which launched in November, has raised new questions for educators committed to making sure students not only get the right answer but also understand how to do the work. Among educators, there is consensus on at least some of the most pressing challenges.
— Are AI detectors reliable? Not yet, says Stephanie Laggini Fiore, associate vice provost at Temple University. This summer, Fiore was part of a Temple team that tested the detector used by Turnitin, a popular plagiarism detection service, and found it to be "incredibly inaccurate." She said it worked best at confirming human-written work but was spotty at identifying chatbot-generated text and least reliable with hybrid work.
— Are students wrongly accused of using AI platforms to cheat? Absolutely. In one case last semester, a Texas A&M professor wrongly accused an entire class of using ChatGPT on final assignments. Most of the class was subsequently cleared.
— How can teachers be sure a student has used an AI chatbot dishonestly? It's nearly impossible unless the student confesses, as both of Hick's students did. Unlike old-school plagiarism, where text matches the source it was lifted from, AI-generated text is unique each time.
In some cases, the cheating is obvious, says Main, the writing professor, who has had students turn in assignments that were clearly copy-and-paste jobs. "I had answers come in that said, 'I am just an AI language model, I don't have an opinion on that,'" he said.
In his required first-year writing course last semester, Main logged 57 academic integrity issues, an explosion of academic dishonesty compared with about eight cases in each of the two previous semesters. AI cheating accounted for about half of them.
This fall, Main and his colleagues are overhauling the school's required freshman writing course. Writing assignments will be more personalized so that students draw on their own experiences, opinions and perspectives, and all assignments and course syllabuses will carry strict rules forbidding the use of artificial intelligence.
University administrators have encouraged instructors to make the basic rules clear.
Many institutions are leaving the decision of whether to allow chatbots in the classroom up to instructors, said Hironao Okahana, head of the American Council on Education's Education Futures Lab.
At Michigan State University, faculty are given "a small library of statements" to choose from and adapt in their syllabuses as they see fit, said Bill Hart-Davidson, associate dean of MSU's College of Arts and Letters, who leads AI workshops for faculty and helps shape new assignments and practices.
"Asking students questions like, 'Tell me in three sentences what the Krebs cycle is in chemistry,' doesn't work anymore, because ChatGPT will spit out a perfectly fine answer to that question," said Hart-Davidson, who suggests asking questions differently. For example, give students a description that contains mistakes and ask them to point out the errors.
Evidence is accumulating that chatbots have changed study habits and the way students search for information.
Chegg Inc., an online homework-help company that has been cited in numerous cheating cases, said in May that its shares had fallen nearly 50% in the first quarter of 2023 amid a spike in student use of ChatGPT, according to CEO Dan Rosensweig. He said students who normally pay for Chegg's service were using the AI platform for free instead.
At Temple this spring, the use of research tools such as library databases declined significantly after the advent of chatbots, said Joe Lucia, dean of the university’s libraries.
“It seemed that students saw this as a quick way to find information that didn’t require the effort or time it takes to access and work with a dedicated resource,” he said.
Such shortcuts are troubling, in part because chatbots are prone to making things up, a flaw known as "hallucination." Developers say they are working to make their platforms more reliable, but it's unclear when, or if, that will happen. Educators also worry about what students lose by skipping steps.
"There's going to be a big shift back to paper-based tests," said Bonnie MacKellar, a computer science professor at St. John's University in New York. The field already had a "massive plagiarism problem," with students borrowing computer code from friends or copying it from the internet, MacKellar said. She worries that introductory students taking AI shortcuts are cheating themselves out of skills needed for upper-level classes.
"I hear colleagues in humanities courses saying the same thing: It's back to the blue books," MacKellar said. In addition to requiring students in her introductory courses to handwrite their code, paper exams will count for a greater share of the grade this fall, she said.
Ronan Takizawa, a sophomore at Colorado College, had never heard of a blue book. To him, as a computer science major, it feels like a step backward, but he agrees it would force students to learn the material. "Most students aren't disciplined enough to not use ChatGPT," he said. Paper exams "would really force you to understand and learn the concepts."
Takizawa said students are sometimes confused about when using AI is acceptable and when it crosses into cheating. Using ChatGPT for some homework, such as summarizing readings, doesn't seem different from using YouTube or other sites students have relied on for years, he said.
Other students say the arrival of ChatGPT has made them paranoid that they will be accused of cheating when they haven't.
Nathan LeVang, a sophomore at Arizona State University, says he now checks all assignments by running them through an AI scanner.
In one 2,000-word essay, the detector labeled certain paragraphs “22% human-written, mostly AI.”
"I was like, 'That's definitely not true, because I just sat here and wrote it word for word,'" LeVang said. But he rewrote those paragraphs anyway. "If it takes 10 minutes after writing an essay to make sure everything checks out, that's fine. It's extra work, but I think that's the reality we live in."