Students, faculty discuss the use of AI detectors at universities after reports of false academic dishonesty accusations

These tools have emerged to detect academic use of ChatGPT, but how accurate are they at identifying the chatbot's work?

 

By RACHEL GAUER — campus@theaggie.org

 

On Nov. 30, 2022, OpenAI launched ChatGPT, a chat-based artificial intelligence (AI) tool that responds to users’ questions and requests.

Although the platform has gained popularity due to the convenience of its capabilities, which range from answering simple questions to writing full essays, users have since learned of several of the service’s shortcomings, including its tendency to occasionally provide false information and the risk of its writing being flagged for plagiarism. Universities and other academic institutions have long dealt with plagiarism, but as ChatGPT has rapidly grown in popularity, they have had to learn how to detect work authored by the program. Online detectors exist to check whether content was generated by AI, but some of their developers have publicly acknowledged that the platforms are not always accurate.

One UC Davis undergraduate student was recently accused of using ChatGPT on an assignment. The student’s name has been withheld per their request, as they are currently under investigation by the Office of Student Support and Judicial Affairs (OSSJA).

“My advice to professors would always be [to not] jump to using an AI classifier,” the student said. “First, do your own investigation and do it properly. Maybe try and compare the student’s writing to an example of their previous writing to see if the writing style remains relatively the same.” 

The student also commented on the potential flaws of the AI detector technology that many professors have begun to use. 

“The fact of the matter is that of the AI detectors available right now, none of them are accurate enough,” the student said. “As these large language models become more and more advanced, the text classifiers to combat them are always going [to be] behind them.”

Whitney Gegg-Harrison, an associate professor at the University of Rochester, recently published an article outlining her stance against the use of AI detectors, including a platform called GPTZero, to screen student work. GPTZero is an artificial intelligence detection platform created by Edward Tian, a current undergraduate student at Princeton University.

“The main thing I want people to understand about ‘AI-detection’ tools is that false positives are far more frequent than people tend to imagine,” Gegg-Harrison said via email. “I honestly think that professors should try putting some of their own writing into GPTZero, because they’ll almost certainly find some of it flagged as ‘likely AI-generated.’ Experiencing that with your own writing makes the issue of false positives that much more visceral.”

Following the accusation, the UC Davis student said that their sister tested the accuracy of GPTZero. In doing so, she found several ‘false positives,’ or works that were not generated by AI but were marked as containing AI-generated content. The student commented on their sister’s research.

“[GPTZero] detected that the second chapter of the book of Genesis in the Bible was entirely written by an AI,” the student said. “Of the 247 documents [my sister] ran through GPTZero, 40% were falsely detected to have used AI.”

Hunter Keaster, who serves as the case director at the Student Advocate Office for OSSJA, commented on cases involving ChatGPT and the typical process that students face when referred to judicial affairs. 

“While I cannot discuss case specifics, I can say that students involved in suspected plagiarism cases involving ChatGPT retain the same rights that they would have in any other plagiarism case,” Keaster said via email.

Fabienne Blanc, a UC Davis parent whose student was involved in a ChatGPT plagiarism accusation, said that she has concerns about what she described as the “premature use” of AI detectors.  

“Some universities are starting to use AI detectors routinely even though experts are warning that those detectors are unreliable,” Blanc said. “Some universities plan on passing all entrance essays through AI detectors. OpenAI’s own detector has an 8% rate of false positives, so that could mean that thousands of students who honestly wrote their essays could have their applications rejected because of faulty technology.” 
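
Her concern follows from simple arithmetic. As a rough sketch (the applicant pool size below is a hypothetical assumption, not a figure from this article), a few lines of Python show how an 8% false positive rate scales across a large pool of honest applicants:

    # Back-of-the-envelope illustration: honest essays flagged by a detector
    # with an 8% false positive rate. The pool size is a hypothetical
    # assumption for illustration, not a figure from this article.
    false_positive_rate = 0.08   # rate Blanc cites for OpenAI's detector
    honest_applicants = 100_000  # hypothetical applicant pool

    falsely_flagged = honest_applicants * false_positive_rate
    print(f"Honest essays flagged as AI-generated: {falsely_flagged:,.0f}")
    # Output: Honest essays flagged as AI-generated: 8,000

Even a modest-seeming error rate, applied at admissions scale, flags thousands of honest writers.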

As ChatGPT has grown in popularity, professors and academic workers have faced the difficult task of deciding whether to use detection sites that are not yet fully accurate or to risk cases of academic dishonesty going unnoticed.

David Horsley, a professor in the Mechanical and Aerospace Engineering department at UC Davis, currently teaches ENG 190, a course on professional ethics. Horsley commented on the future of AI, as well as its potential benefits in the classroom.

“There’s obviously a lot of potential for abuse,” Horsley said. “I do think there are some good use cases, such as creating model essays that can help students learn how to construct such an essay on their own. Frankly, AI isn’t going away, so we’re going to need to figure out how to live with it.” 

Sofia Rhea, a second-year Ph.D. student and teaching assistant (TA) in the communication department, commented on the future of AI detection sites — including Turnitin, which many professors, TAs and others who grade student work currently use to detect traditional plagiarism.  

“Turnitin has […] developed some AI detection that I believe will be launched wide-scale by April 2023,” Rhea said. “Their AI detection can successfully detect AI written work with high accuracy and has a rather low rate of false positives. As technology like ChatGPT continues to get more advanced, so will the technologies used to detect it.”

 

Written by: Rachel Gauer — campus@theaggie.org