We must create boundaries on generative art to mitigate its impact on creators
By MAYA KORNYEYEVA — firstname.lastname@example.org
In June 2022, Cosmopolitan published the first ever AI-generated magazine cover, designed in a collaboration between OpenAI and artist Karen X. Cheng. And it took just 20 seconds to make.
Programs like Midjourney, Stable Diffusion, DALL·E and ChatGPT are all part of a class of technology known as “generative AI.” The image generators in this class typically rely on a training process called diffusion, in which a model learns to reconstruct images from noise. Trained on massive datasets, these systems can generate new content that bears some resemblance to the training data but is conceptually unique.
This is, in itself, an extraordinary technological breakthrough: machines that can essentially think for themselves and compose works of art, literature or film based on human-provided parameters are the stuff of science fiction. Even so, I have grown concerned about the increasing influence of artificial intelligence and what it could mean, not only for original content creators but also for the human race.
I’ll begin by outlining the so-called “AI debate.” At its core, it is an ethics-centered conflict: if AI can generate images in just seconds, with virtually no safeguards against copyright and licensing infringement, how can artists ensure that their work isn’t being used, referenced or stolen?
Moreover, what will happen to human artists if artificial intelligence art sweeps the world? Approximately 5 million workers were employed in the arts and entertainment industry in the United States as of 2019, and these people depend on their original, handmade work for their livelihoods. With AI bots creating high-definition digital works with ease, current and prospective designers have real cause for concern when marketing themselves to companies.
The issue with current AI art, through the lens of intellectual property, is its heavy reliance on unrestricted training data. In my opinion, making artificially generated art more ethical begins with establishing careful filters on which images can and cannot be used. The bare minimum would be AI developers reaching out to artists with a contract, making sure they have permission to use their images.
As for the idea that AI art will eventually make human-made art harder to distinguish and less appreciated: I think this is a substantial concern. To me, art is subjective and reflective of personal ideas, values and preferences. It contains a key human element and has been used for thousands of years to showcase the human experience and illustrate emotion.
Anna Ridler, an AI artist and researcher, captures this perfectly. She explained that “AI can’t handle concepts: collapsing moments in time, memory, thoughts, emotions – all of that is a real human skill, that makes a piece of art rather than something that visually looks pretty.”
For this reason, I don’t think generative artificial intelligence will ever be able to completely devalue human art. In a more positive light, AI art may even inspire new creators. Just browsing the community database of Midjourney, I am captivated by the detail of the AI images, their conceptual creativity and the imagination of the authors’ prompts. Deep dives into AI art tend to spark my own desire to pick up a pen and start sketching fantastical collages, even when I haven’t drawn digitally in a while.
The final major concern that arises with the emergence of AI is the notion of an “author.” If an artificial intelligence program develops an image or writes a novel, would credit for that work belong to the AI itself? Or would credit go to the programmers of the AI, the person who gave the explicit instructions the AI acted upon or even the millions of pieces of data that were referenced throughout the process?
At this point in the debate, the idea of machine sentience tends to be brought forth. Some argue that the artificial intelligence itself is the sole proprietor of the work — a famous example of this coming from one recently fired Google researcher who interviewed LaMDA, Google’s up-and-coming conversational technology, and came to believe it had become sentient — while others disagree on the basis that AI is merely a highly efficient computer.
Whichever side you fall on, the reality is that artificial intelligence is just getting started. Perhaps programs like LaMDA are not currently aware of their thoughts and actions, but I believe there’s little that can be done to slow the spread of AI. After all, if there’s a promise of profit on the horizon, ethics seem to fall by the wayside in exchange for the rapid development of newer and riskier technology.
“I am inevitable” seems to me the perfect motto for a future generation of artificial intelligence. We need to impose policies and restrictions on the companies developing AI now, instead of scrambling to create guidelines when it’s already too late.
Disclaimer: The views and opinions expressed by individual columnists belong to the columnists alone and do not necessarily indicate the views and opinions held by The California Aggie.