Learn more about how AI works, people’s fears around it and how it affects human lives
By SABRINA FIGUEROA — features@theaggie.org
Advancements in artificial intelligence (AI) have taken the world by storm, leaving some in fear and others embracing what it could mean for society’s future.
AI has developed further and faster than society could have imagined. It has reached the point where AI is being used to generate “deepfakes” that people can no longer reliably distinguish from reality, which can have repercussions in both politics and personal lives.
This has fueled widespread fear. Even Geoffrey Hinton, known as “the Godfather of AI,” reportedly told the New York Times that one of his concerns is that the internet could be flooded with false images, text and videos to the point where the average person would no longer be able to identify what is true.
So, how did we get to this point? How did AI develop so quickly in such a short amount of time?
Martin Hilbert, a communication professor and chair of the designated emphasis in computational social science at UC Davis, said there are two reasons for its fast development.
“The amount of data [produced before and during the COVID-19 pandemic], together with the trick of using GPUs, which gave us the computational power, led to this jump in parameters and made them more useful,” Hilbert said.
GPUs, or graphics processing units, are chips built to run vast numbers of calculations in parallel, and they have opened new possibilities in gaming, content creation, machine learning and more.
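To make that concrete: the heart of a modern AI model is little more than enormous matrix multiplications, in which every output value can be computed independently of the others. That is exactly the kind of work a GPU spreads across thousands of cores at once. A toy sketch in Python (the sizes are arbitrary, and NumPy here runs on the CPU; frameworks like PyTorch or JAX run the same operation on a GPU):

```python
import numpy as np

# One layer of a neural network is essentially a single big matrix multiply.
# The sizes below are arbitrary, for illustration only.
inputs = np.random.rand(64, 1024)     # a batch of 64 examples, 1,024 features each
weights = np.random.rand(1024, 4096)  # a layer with 4,096 outputs

# Each of the 64 x 4,096 output values can be computed independently,
# which is why a GPU's thousands of parallel cores speed this up so dramatically.
outputs = inputs @ weights
print(outputs.shape)  # (64, 4096)
```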
Artificial intelligence runs on data: the more data a model gets, the better it becomes at predicting and generating human-like work and behavior. When the pandemic hit and people yearned for human connection, society turned to social media and online social-simulation video games to communicate with those they couldn’t spend time with in person. This created a plethora of digital footprints and data that AI now utilizes.
Because data is power, this creates a major dilemma in the AI world. Since human society is filled with bias and discrimination, so is much of the data AI utilizes. Left unchecked, AI can contribute to misinformation or exhibit the same discriminatory biases that humans have.
“If we give [AI] data based on where [society is] right now [and in the past] — living in a very sexist and racist society — that is what [AI] will reproduce and that’s what it does reproduce. That isn’t the machine’s fault, it’s just the data that we have,” Hilbert said.
Even so, it’s not difficult to factor out these discriminatory variables so that AI doesn’t use them. According to Hilbert, all it takes is good prompt engineering.
“It’s extremely difficult to eliminate human bias in the brain, it’s just a bandwidth capacity problem,” Hilbert said. “However, you can get it out of machines. So you can tell a machine, ‘Consider all the variables you want, but do not consider gender. Make sure that in regards to gender, the outcome is completely neutral,’ for example. The machine will be able to optimize that.”
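A minimal sketch of the kind of exclusion Hilbert describes, in Python with scikit-learn (the file name, column names and model are hypothetical placeholders): the sensitive variable is dropped before training, so the model literally cannot weight it.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical hiring data; the file and column names are illustrative only.
df = pd.read_csv("applicants.csv")

# Exclude the sensitive variable so the model never sees it.
features = df.drop(columns=["gender", "hired"])
labels = df["hired"]

model = LogisticRegression(max_iter=1000)
model.fit(features, labels)  # optimizes over every remaining variable except gender
```

One caveat worth noting: simply dropping the column doesn’t guarantee neutrality on its own, since other variables (a zip code, for instance) can act as proxies for it. That is why Hilbert’s stronger formulation, checking that the outcome itself is neutral with regard to gender, matters.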
As machine learning becomes more advanced, society is seeing that it can aid human cognition and labor. In some cases, it has already surpassed human abilities, such as on standardized tests.
“For the SAT, in reading, writing and math, humans score [in the] 65th [percentile] and [GPT-4] gets up to 90,” Hilbert said.
It doesn’t stop at the SAT, either. GPT-4, a multimodal large language model created by OpenAI, has also passed professional exams, such as the bar exam that law graduates take to become attorneys. GPT-4 passed both the multiple-choice and written portions, scoring in the 90th percentile and exceeding the average human score.
Hilbert also suggested that artificial intelligence may already be better at emotional empathy than humans are.
“If we use a machine to consult a primary care physician, for example, people evaluate the machine to be much more empathetic, [especially in] emotional listening. It’s not a robot, it’s something very intimate,” Hilbert said. “In terms of knowledge and emotional intelligence, [AI] outperforms us already, and it can always become better.”
However alarming this may sound, AI is still not perfect. In fact, OpenAI has acknowledged that its large language model remains less capable than humans in many scenarios, despite outperforming them on the SAT and other professional exams. Its biggest problem is that it tends to make things up, or “hallucinate,” and insists it’s right when it’s not.
Despite its seemingly helpful advancements, some continue to see AI’s power in knowledge and other skills as a threat to education and the workforce.
At almost every university, including UC Davis, professors note in their syllabi that the use of AI on assignments is prohibited unless otherwise instructed. To regulate its use, professors often turn to AI detection programs to hold students accountable. Yet none of these programs can reliably detect AI-generated text, and Turnitin, the program universities use most, has admitted that its detector produces false positives.
Lizette Torres-Delgado, a second-year political science and public service major, spoke about her experience with AI at UC Davis.
“I haven’t used AI because many of the professors that I’ve had are really against it, and I’d also rather do research myself to know more about a topic I need to write about,” Torres-Delgado said.
In the midst of this, there are some professors who make it mandatory for students to explore AI and become familiar with its functions.
“I make it mandatory in my classes, and that doesn’t make me very popular with my colleagues,” Hilbert said. “I [require the use of AI because] I just think it’s unfair if 10% of students use it and the other 90% don’t. It’s also why I ask more of my students now; [they can] do more thanks to the [AI] assistance they have.”
Additionally, some students use ChatGPT and other large language models in a way that wouldn’t be classified as “cheating,” but simply as a complementary aid.
“I have used ChatGPT and also Notion’s built-in AI and I’ve really enjoyed it since it helps me come up with new ideas or helps me understand things more clearly whenever I am confused or lost [on a topic]. AI is a great tool to help students brainstorm in my opinion,” Alejandra Velasco, a second-year computer science major, said.
In her experience, AI benefits educational spaces rather than threatens them.
“Personally, I think teachers should be open to using [AI] more. If students decide to use this tool to cheat, then it’s on them and it will reflect on their exams and career,” Velasco said. “If everyone finds a productive way of using it, then it can help benefit students and prevent them from cheating more.”
Apart from education — whether people see it as a threat or not — AI and other digital technology have been used in the workforce as well, causing another wave of concerns: what if AI were to replace humans in the labor market?
According to a study done by the McKinsey Global Institute, jobs for health care, STEM, transportation, warehousing, business and legal professionals are projected to grow alongside AI, while office support, customer service, sales, production work and food services are expected to be the most negatively impacted by AI’s acceleration.
The research found that the labor market saw 8.6 million occupational shifts from 2019 to 2022, with most people leaving food services, in-person sales and office support for other occupations. The study also suggests this pattern will continue due to AI’s impact.
Although AI will change the way many jobs work, such as by assisting humans with their tasks, there are still things that give society hope that humans will not be replaced.
“It all comes down to consciousness. [Humans] can hold consciousness without thinking and feeling. For example, this is what happens when you meditate. Machines cannot do that, as far as [researchers and scientists] know,” Hilbert said. “Thinking is an information process, and consciousness is something different.”
Without consciousness, AI is still not capable of doing everything humans can do, meaning our jobs won’t be replaced by AI just yet.
AI certainly comes with repercussions, but it’s important to understand that the machine itself is not responsible. Digital technology is neither inherently good nor bad, nor is it neutral; the responsibility for how it’s used falls on us humans. In other words, technology is socially constructed: humans shape it, not the other way around.
Hilbert suggested that people use AI based on their different interests and intentions, but that “it’s always the human at fault” whether it comes down to the data AI uses or the way it’s used because the machines “have no agency” in the decision of what they are used for.
With all this being said, AI is also used for things like helping people connect with others. For example, SignAll, a company based in Hungary, created a machine that is able to read American Sign Language and translate it into spoken and written language in an attempt to break communication barriers between people.
But this is just one example of many, ranging from saving bees and predicting the effects of climate change to screening for cancer and reducing inequality and poverty.
However terrifying it may feel to think about where AI and other digital technology are heading, it’s best that society keeps talking about it, researching it and staying open to adapting to it. Regulated by laws and governments, it can be a genuinely useful tool.
“Be mindful about [AI], talk about it, use it and then empower yourself with it; have a companion on your side that can help you [personally] and can help you shape the world,” Hilbert said. “If there are problems that you aren’t satisfied with [in the world], then we can use these tools to help make it a better place.”
Written by: Sabrina Figueroa — features@theaggie.org