
Davis, California

Sunday, March 16, 2025

The successful, the worthless and the artificially intelligent

Exploring how society perpetuates the manifestations of its own fears about AI 

 

By VIOLET ZANZOT— vmzanzot@ucdavis.edu

 

Artificial intelligence (AI) — it does your homework, plans your workouts and explains whether or not you should “text them back.” It feels limitless, and if you do happen upon what feels like a limit, you can tell it to do better, to be more succinct, more accurate, more precise — to sound more human.

In fact, AI is better than a human; it has all the answers, and you don’t have to worry about ChatGPT’s feelings.

Today, I am not here to talk about AI in an operational sense. To be perfectly candid, I can only use a computer for exactly what one would use a DVD player, a piece of paper or a book of facts. That is to say, my personal knowledge and interest in technology do not extend further than watching television shows and movies, writing, reading and looking random things up.

I would much rather discuss the “people” side of AI and the conversation surrounding the integration of this way of knowing into our society, specifically in the world of education. Why are educators so afraid of AI? Is it, perhaps, not the fault of the student or the technology? Does the real fear manifest as a result of the very people who create it?

I think yes. The reason we should be wary of AI is not because students could use it as a crutch but because they could use it as a wheelchair. More importantly, we should be cautious that the reason students may rely on this support is not because they are lazy, but because there is an increasing pressure for success.

“By whatever means necessary” — that is the mantra instilled in today’s anxious world. While that may not be the vocabulary used by educators, the expectations within education emphasize quantity over quality: the quantity of correct answers rather than the quality of the thinking behind them.

The two concerns with AI I find to be most demonstrative of this point are the fears about cheating and the loss of critical thinking skills — of course, these go hand-in-hand. Academic dishonesty operates in ever new and exciting ways in the wake of the AI revolution. Because of the way AI is able to answer questions and reproduce language, no field of study can avoid the potential replacement of student work by AI-generated content.

The phobia of forgery, plagiarism and inauthentic work has greater implications that all point to a fear of “cognitive offloading” — the delegation of mental tasks to an external tool, with the worry that AI will come to replace critical thinking altogether. Despite the benefits of AI in increasing efficiency and accessibility, is it worth it if we lose cognitive skills in the process?

I think that question is derived not from problems with AI, but from the foundational structure of success that exists in our social system, which is, more importantly, built into our schools. From the very beginning, we are taught the A, B, Cs and Fs. We are taught grades. We are taught success. We are taught failure.

I like how Gustav Ichheiser phrases this concept — he describes the notion that “the norms of success in our society require that those individuals [who] ‘ought’ to attain success are competent and worthy, and, to formulate it negatively, the incompetent and unworthy should be denied success.” He details the American system of success, which forces a need for either accuracy or failure.

When we incorporate a tool that can create accuracy and precision into a society that finds worthiness in those who can be accurate and precise, and rewards them with success, are we not bound to abuse this system for the sake of getting ahead, especially considering that “ahead” is where we find value?

I am not here to argue about the wonders of AI. That is not to say there aren’t any, but it is certainly not my wheelhouse, nor do I find it deeply captivating. It is important to me, and hopefully to you, that we recognize the “dangers” of AI may be less about its technological capacity and more about our confrontation with our own society’s perception of worth and success. Perhaps we should worry less about the ways in which people can manipulate technology for personal gain and more about why people feel it is necessary to do so in the first place.

 


 

Disclaimer: The views and opinions expressed by individual columnists belong to the columnists alone and do not necessarily indicate the views and opinions held by The California Aggie.

 
