75.3 F
Davis

Davis, California

Saturday, August 31, 2024

At The Crossroads: The importance of humanity to artificial intelligence

CYRTERIA [CC BY-NC 3.0 US] / DERIVATIVE WORK: MORNINGLEMON / CREATIVE COMMONS
CYRTERIA [CC BY-NC 3.0 US] / DERIVATIVE WORK: MORNINGLEMON / CREATIVE COMMONS
AI without ethics could prompt the downfall of mankind

Well, Aggies, we’re rapidly approaching the scariest day of the year again. (I’m of course referring to Halloween and not the 2016 presidential election, although at this point the two are nearly interchangeable).

It’s only fitting, then, that we celebrate by exploring the spooky uncertainties surrounding artificial intelligence (AI) — a rapidly-developing technology that Stephen Hawking chillingly predicted could potentially end humanity.

Once a mere dream penned by sci-fi writers, AI is now deeply imbedded in our daily lives, from Google’s self-driving cars to computerized personal assistants like Siri. Most autonomous machines offer societal advantages, like increased space and ocean exploration, safer work environments, stronger health care and even better recommendation engines on streaming sites like Spotify and Netflix.

But Hawking probably isn’t worried about an AI that enables your binge-watching habit somehow becoming a sentient killing machine.

One of the most disturbing predictions for the development of AI is computer scientist and mathematician Vernor Vinge’s hypothesis of the “Singularity” — an eventual burst of accelerated and uncontrolled AI advancement that will leave slow-evolving, intellectually-inferior humans in its dust.

Vinge argues that superintelligent AIs will one day advance themselves beyond our species’ comprehension, dethroning humans as Earth’s reigning creatures.

Nathan, the reclusive genius in Alex Garland’s 2015 thriller Ex Machina, agrees with Vinge’s theory, declaring: “One day the AIs are going to look back on us the way we look at fossil skeletons on the plains of Africa: an upright ape living in dirt with crude language and tools, all set for extinction.”

Still, as the CEO of a software company, Nathan strives to push the limits of AI, recruiting a young programmer, Caleb, to administer the Turing test — a measure of a machine’s ability to pass as a convincing human — to Ava, Nathan’s AI creation in the movie.

Ava’s kindness, wit and beauty gradually convince Caleb (and the audience) that she’s not just a humanlike robot, but a complex, emotional and virtuous individual almost indistinguishable from a genuine human being.

But Ava is anything but genuine. To break free from her imprisonment, she’s been manipulating Caleb all along, mercilessly exploiting his sexual attraction to her while capitalizing on Nathan’s arrogance, drunkenness and misogyny. Sadly, poor Caleb is as blind to her facade as Trump is to his dwindling poll numbers.

So, after finding Nathan’s horrifying closet full of the lifeless and dismembered bodies of former AIs, Caleb ardently decides to help Ava escape from her captor. The film ends with Ava murdering Nathan, coldly abandoning Caleb in the locked facility and blending in with the faces of a bustling city, dressed in the lifelike skin of terminated AIs.

Ava’s ruthless determination exemplifies skeptics’ concerns about the decision-making processes of digital-intelligence systems. After all, Ava isn’t explicitly malicious — she’s simply surviving in accordance with her programmed goals. But without a moral compass, Ava can calmly overstep ethical boundaries inherent to humankind.

Nick Bostrom, a philosopher and physicist, explained in an interview with Slate that highly-advanced AIs can corrupt even the simplest tasks.

Say, for example, that an AI is programmed to harmlessly manufacture paper clips at maximum efficiency. Bostrom asserted that, because humans can switch off AIs and consequently halt the production of paper clips, a superintelligent AI might rationally kill any person posing a threat to their mission.

Therefore, with more dangerous objectives and a continual lack of moral conscience, superior AIs could potentially eradicate the human race.

Fortunately, many leading developers are attempting to counter the existential risks of AI.

The billion-dollar non-profit OpenAI researches and promotes the development of friendly AI — systems that benefit rather than harm mankind. Backed by Elon Musk, the CEO of Tesla who deemed AI “potentially more dangerous than nukes,” the project aims to incorporate morality into the algorithms of intelligent systems.

Stanford’s One Hundred Year Study on Artificial Intelligence further works to predict AI’s effects on all sectors of society, encouraging cooperation between computer scientists and other thinkers, such as philosophers, psychologists, doctors and political scientists.

Collaborations like these are vital to ensure that AI progresses enough to fulfill its potential as an extraordinary invention — without progressing so far as to become our last one.

 

Written by: Taryn DeOilers — tldeoilers@ucdavis.edu

 

Disclaimer: The views and opinions expressed by individual columnists belong to the columnists alone and do not necessarily indicate the views and opinions held by The California Aggie.

LEAVE A REPLY

Please enter your comment!
Please enter your name here