Universities Scramble to Prepare for the Coming Academic Year, Wary of ChatGPT
The introduction of ChatGPT in November caused a stir in the academic community, as the AI chatbot unexpectedly gave students a discreet way to speed through essays and assignments. As the new school year approaches, many universities are still formulating their response.
Universities around the world spent much of the previous academic year taking ad hoc approaches to the software—or no approach at all. Some professors banned its use, calling it outright plagiarism, while others deliberately built it into their curriculum. The result was inconsistent rules across classes and departments.
The situation is changing only slowly: Without clear guidance for individual departments, universities risk repeating the free-for-all of the 2023 final exams. But many are realizing that they have to find a way to live with AI.
“It’s moving so fast,” said Eric Fournier, director of educational development at Washington University in St. Louis. ChatGPT reached 100 million users in less than two months, leaving academic officials in the dark as students embraced the technology. “It went from curiosity to panic to grudging acceptance that these tools are here,” he said.
Professors suspected students of cheating from the start, said Stetson University student Madison White. “Unless professors fully explored the software, they often immediately assumed it was a hack for students to do reading or homework.”
Generative AI tools such as ChatGPT, developed by Microsoft Corp.-backed startup OpenAI, are trained on massive amounts of data and then use that training to answer user queries — often with uncanny accuracy. The software is one of the biggest shifts in the technology world in decades, representing a trillion-dollar opportunity that will make it even harder for schools to deny or ignore.
But for professors and administrators looking to integrate generative AI into their curricula, the big question is: How? They have to find a middle ground, said Steve Weber, vice provost for undergraduate teaching and learning at Drexel University. Teachers can’t completely ban the use of the tool and neglect to teach it, but they also can’t allow it to be used without restrictions, he said.
“It can be a good tool to use in certain later courses, especially those that prepare students for careers in industry,” Weber said.
A University of Washington professor structured his final exam so that students generate ChatGPT responses from a prompt and then correct the text in ways only someone familiar with the subject could. At the University of Southern California, business professors are experimenting with “TA chatbots” to help answer logistical questions about class curriculum.
Harvard University, on the other hand, uses a duck-themed bot to answer students’ questions about its introductory CS50 course. “CS50 Duck” is designed to explain lines of code and advise students on how to improve their programming. Such tools could work in any kind of university department, said David Malan, a Harvard professor who teaches the CS50 course. So far, however, the integration of AI into classroom work has remained mostly in technical fields.
“I’m sure it will take time for people to decide for themselves how they want to handle these new tools in their classrooms, if not incorporate them,” Malan said.
In some cases, professor-approved artificial intelligence is spreading beyond the computer lab. At the University of Pennsylvania’s Wharton School of Business, Ethan Mollick was one of the first educators to add an AI policy to his curriculum. The assistant professor expects students to use artificial intelligence and ChatGPT judiciously, staying aware of the technology’s limits.
ChatGPT has helped make it clear that many students are just trying to pass courses to get a degree, said Arya Thapar, a rising junior at Chapman University. If left unchecked, it will not foster a love of learning or critical thinking skills.
But university-wide policy has been slow to take shape. Drexel University is still rolling out its guidelines, but they are expected to include the idea that students “don’t use it if it’s not allowed, and if you do, you have to cite the use,” according to Weber.
At the University of Washington and the University of Southern California, the use of artificial intelligence in the classroom is still left to the discretion of individual professors.
“Technology is moving so fast,” said Peter Cardon, USC professor of business communication, “you really depend on the community to help you make informed decisions.”
But uncertainty can create gray areas for students. If a professor says nothing about using AI in class, is it allowed—or can students be disciplined?
That makes it a challenge unlike earlier classroom technology aids such as calculators. “It feels more like a profound change,” the University of Washington’s Fournier said.
A student at Santa Clara University said ChatGPT single-handedly improved their grades in economics. The chatbot would produce answers the student didn’t fully understand but that were good enough to earn full marks on problem sets and quizzes.
The student, who asked not to be identified because of ethical issues surrounding ChatGPT, likened the situation to a child going through a divorce: Each parent has different rules, and the guidelines become confusing without a unified approach.
A key step is to educate faculty about what ChatGPT can and cannot actually do, said Ramandeep Randhawa, senior vice dean of the USC Marshall School of Business.
“Our goal would be to not think backwards like we did last semester,” he said. “Everyone is racing against the clock constantly.”