
Davis, California

Saturday, May 25, 2024

Artificial intelligence is the future, but is it a threat to education?

With the rise of chatbots like ChatGPT that feature essay writing technology, student academic integrity and creativity need to be preserved

In November of last year, OpenAI, an artificial intelligence (AI) lab in San Francisco, released a new chatbot called ChatGPT, whose “GPT” stands for “generative pre-trained transformer.” The tool’s launch has revolutionized the automation of information and generative AI, a field that enables users to hold human-like conversations with computer software and receive desired content generated by a bot.

From asking the AI software to develop business proposals in a matter of seconds to requesting it to “explain AI Alignment, but write every sentence in the speaking style of a guy who won’t stop going on tangents to brag about how big the pumpkins he grew are,” the chatbot seems like it can do it all. Some have even gone so far as to attest that similar chatbots like Google’s LaMDA have developed sentience, an awareness of emotions and their own existence.

While many tech gurus of Silicon Valley have praised ChatGPT’s introduction and advancements, educational institutions have raised ethical concerns regarding the threat that AI poses to both students and teachers. Professors at schools like Northern Michigan University have already caught students using the AI to cheat on essay assignments and have been forced to alter their curricula, incorporating more oral exams, first drafts handwritten in the classroom and restrictive on-campus browsers that prevent students from using the chatbot to write responses for them.

In addition, ChatGPT can bypass plagiarism-detection programs like Turnitin, since the chatbot generates original content as if a student had written it. Such programs further appeal to students because they can save time and effort on class assignments, emails and even job applications.

While it might be tempting to have your paper written on demand, reliance on AI would mean that skills such as crafting original, meaningful essays, developing cohesive arguments and expressing oneself authentically through writing would all go unpracticed, in the classroom and beyond. This reliance can also stifle nuanced thought and progressive discourse in education, further jeopardizing students’ intellectual development and success in future careers.

AI is also a threat to journalism. Because the chatbot has access to a large database and can closely imitate human writing styles, it can easily mass produce and quickly spread falsified narratives through public media. Misinformation on vaccines in the U.S. during the COVID-19 pandemic, for example, endangered the lives of millions by sparking steadfast opposition to vaccination; imagine what ChatGPT could do on a global scale, given time.

Search engines like Google and many creators of original content are at risk of being displaced by generative AI programs. And the field will only continue to grow, with investors and tech giants like Microsoft funding OpenAI, which expects $1 billion in revenue by next year.

So with the rise of AI technology, could this mean the end of creativity and integrity in the public sphere?

The answer is no — as long as it is used responsibly. Regulations need to be put in place to prevent misuse of AI. On Jan. 30, Senator Bill Dodd introduced California’s first-ever AI-drafted resolution, showing that AI can be applied appropriately while expressing the state’s commitment to developing legislation and policies to ensure its safe use. 

Furthermore, Edward Tian, a Princeton student majoring in computer science and minoring in journalism, developed a free program called GPTZero in response to the chatbot. It can detect whether an essay was written by a human or by an AI like ChatGPT. Over 30,000 teachers have reached out to him to use the platform, and those who have tested it have given only positive feedback. Likewise, OpenAI plans to watermark ChatGPT’s output to make plagiarism easier to spot, and Turnitin recently introduced its AI Innovation Lab as a countermeasure to AI-assisted work submitted by students.

While moves are being made to regulate AI usage, this does not mean we should completely reject the tool. We can still appreciate the technological advancement of ChatGPT, with its ability to draw relevant information from an abundance of datasets in the digital web and cater to a user’s needs immediately.

Just be responsible with how you use generative AI programs. Rather than blindly relying on them, improve and coexist with them. Be conscious and critical of the content you consume and produce. It may be tempting to let AI quickly write an essay or news article. But think about what you would lose out on and what risks you would be taking as you progress through your education or career by letting a chatbot do the work for you and, worse, be you.


Written by: The Editorial Board