Artificial Intelligence (A.I.) has been a hot topic of late, especially on the heels of the release of ChatGPT. The rapid advancement in the sophistication of A.I. has raised a multitude of concerns, such as bias being baked into A.I. systems and A.I. potentially replacing human workers. The concerns are significant enough that over 500 tech experts, along with Elon Musk and Steve Wozniak, signed an open letter calling for a pause on the development of advanced A.I. systems. Yet despite fears that A.I. could gradually replace human workers, development is forging ahead, with GPT-5 already on the horizon.
While some may laugh at the idea that A.I. is sophisticated enough to replace human workers when it still stumbles on basic math and sometimes responds to questions with pure gibberish, it is already creating real obstacles in at least one industry. Artists are growing increasingly concerned about the development of A.I. art, and their fears are valid. With text-to-image A.I., users can create a visual of any fantasy they have and, with cheap apps like Lensa, can see themselves transformed into works of art in seconds. As A.I. art continues to improve, with massive datasets constantly feeding images into these models, the concerns keep rising.
The issues have grown far beyond users playing with A.I. images for fun. A.I. art is now winning art contests, encoding bias, promoting harmful standards of beauty, and even spawning A.I. porn generators that compete with existing porn sites. There is also the known problem of deepfake A.I. being used to create explicit images, predominantly of women, without their consent. While the harms of deepfakes are plain to see, A.I. art is also affecting artists in ways that are harder to detect. Here is the A.I. art controversy, explained.
What is the A.I. art controversy?
A.I. art is controversial because of the impact it is having on real artists. The art world has long been labeled “elite,” with the “starving artist” treated as the norm. It is a highly competitive industry that is difficult to break into and plagued by inequality along lines of class, race, and gender. A.I. art has thus entered an industry already filled with vulnerable workers and made things even harder for them. One of the biggest problems with A.I. art is how it is generated: many A.I. art generators, such as Stable Diffusion, rely on enormous datasets to produce more accurate text-to-image creations.
Stable Diffusion doesn’t always translate text into imagery accurately. When it first debuted, users could prompt the A.I. to create a human, for example, and the generated figure might have misplaced body parts or extra limbs. That’s because it is a learning model that has to be trained to form associations between images and words. To improve that accuracy, the generator is trained on a dataset called LAION-5B, which consists of billions of image-text pairs. The problem is that private photos, such as medical record photos, have turned up in the dataset, and the people in them have little recourse to have them removed. Thousands of images have also been scraped from sites like DeviantArt and ArtStation without artists’ knowledge. When real artists’ work is used to train generators like Stable Diffusion, their art and styles are being exploited without credit or compensation.
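To make the text-to-image step concrete, here is a minimal sketch (not from the article) of how Stable Diffusion is typically invoked through Hugging Face’s diffusers library. The specific checkpoint name and prompt are illustrative assumptions; the point is that a single sentence of text is all the user supplies, and the associations learned from LAION image-text pairs do the rest.

```python
# Minimal, illustrative sketch of a text-to-image request with Stable Diffusion
# via Hugging Face's `diffusers` library. Checkpoint and prompt are assumptions
# chosen for demonstration, not the article's own example.
import torch
from diffusers import StableDiffusionPipeline

# Load pretrained weights (trained on LAION-derived image-text pairs).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# One line of text in, one image out.
prompt = "a portrait of a knight in the style of an oil painting"
image = pipe(prompt).images[0]
image.save("knight.png")
```

That ease of use is exactly why artists are worried: producing an image in a particular style takes a sentence and a few seconds, while the style itself was learned from work scraped without permission.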
These A.I. art generators are often far cheaper than the prices human artists must charge to cover the cost of their labor, which is why A.I. art is emerging as a strong competitor in the industry. Stories have already surfaced of A.I.-generated art taking first place at art competitions, raising the serious problem of competitors cheating with A.I. entries. A.I. art is finding ways to displace other workers as well. An Alexa feature can now use A.I. to create simple animated stories for children, and Amazon aims to eventually generate an animated story from any prompt a child suggests, which could significantly threaten children’s book creators. Sex workers, too, are growing concerned about the rise of A.I.-generated porn.
The Lensa App controversy, explained
A recent example of the problem with A.I. art is the Lensa A.I. app controversy. Lensa A.I. was created by Prisma Labs and went viral in December 2022. The app turns selfies into “magic avatars” using A.I. for a fee of $3.99 for trial use or $7.99 for one-time use. Even $3.99 seems steep for an app that turns a selfie into a poorly made digital rendering that hardly resembles the original photo, yet Lensa A.I. became a social media craze: #lensa has racked up over 600 million views on TikTok as countless users shared their Lensa avatar creations.
Some users shared Lensa “fails” in which their avatars turned out nothing like they expected. Many others, however, were genuinely delighted and shocked by the A.I. art, posting about how it made them more “beautiful” or “finer” than they ever thought they could be. The app essentially capitalizes on insecurities and unattainable beauty standards. Worse still is how the app defined beauty. Based on many of the Lensa examples, the app seemed to suggest that beauty was, well, white. A Black woman wrote about how the app gave her white husband nearly 100 flattering images that actually resembled him, but when she used it herself, most of her features were washed away and replaced with Caucasian ones, and in some photos her skin tone was changed to white.
Other individuals found that the app either made them white or exaggerated their features into racial stereotypes. Images of women were hyper-feminized and sexualized, with the app altering their features and even body shapes to match tired beauty standards. It is a testament to the bias that frequently creeps into A.I. when systems are not built to represent race and gender accurately, or when the data fed to learning models is itself filled with bias. In addition to being racist and potentially harmful to body image, Lensa retained the right to use any of the A.I.-generated avatars for whatever purpose it chose. Unfortunately, over 4 million people downloaded the app in December and paid $3.99 for their problematic avatars, choosing Lensa over countless talented artists who can bring a human element to their work that A.I. cannot.
What the A.I. art controversy means
The issues that A.I. art is creating are likely only the tip of the iceberg. There’s no telling how sophisticated this art will become or how much it will impact artists. Not only are artists having their work stolen, but some are allegedly being falsely accused of creating A.I. art. One Redditor recounted how he was banned from an art subreddit because the moderator claimed his image was A.I.-generated and that he was passing it off as his own. When he denied this, the moderator told him no one would believe him and that he’d just have to find a new art style because “it’s the way of the world.”
So, even though countless artists have had their work and styles stolen by A.I., those same artists are now expected to avoid creating anything that resembles A.I.-generated art. Needless to say, it’s a concerning situation. Still, generators like Lensa and Stable Diffusion also showcase the value of a human artist. While these generators distort and twist the likenesses of people who aren’t straight or white, human artists can serve everyone and can truly capture the qualities that make a person unique and beautiful, instead of washing them away or turning them into something else.
(featured image: salihkilic / Getty Images)
Published: Jun 5, 2023 01:44 pm