Throughout June 2024, many artists posted screenshots of their new profiles on Cara, an art portfolio app, to Instagram in protest of Meta's updated AI-training policies.
Coalitions of content creators flocked together in solidarity against Meta and planned a mass exodus from Instagram in response. These artists are angry: notice of the policy arrived long after AI training had already begun, without their consent and with no meaningful alternative.
Meta’s policy change indicates that it may legally use data collected through its apps—Facebook, Instagram, etc.—to train its AI software. Artists’ primary concern is that their work, publicly shared via Facebook and Instagram over the past decade and into the future, is now being used to train software that can mimic their art style, repurposing their work without compensation.
Artistic style is a nuanced, individual concept that is rarely as easily defined as “intellectual property.” Human subjectivity can articulate impressions of art that internally differentiate one artist from another, one writer from another. This is how we can distinguish Van Gogh from Claude Monet, Pixar from DreamWorks, bell hooks from Roxane Gay, or, as far back as antiquity, Aristotle from Plato. STYLE. FLAIR. PERSPECTIVE. It is what makes art and thought unique.
In a world of AI software and machine learning trained on data gathered from artists, poets, and activists whose work is defined by their individualism, how do we reward their talent and ambition when artificial imitations of their individuality are being broadcast as someone or something else’s original work? The human mind and body can take days, weeks, or months to create a masterpiece, but AI takes minutes to replicate and dilute that niche.
In some countries, Meta may have been required to add an option to opt out of AI training in Instagram’s settings. However, it isn’t as simple as clicking a button; acquiring opt-out status involves a far more bureaucratic back-and-forth with Meta customer service reps.
AI is not a tool for artists; it threatens their livelihood.
However, this is only the most recent controversy regarding AI and its effect on humanity. Recently, Google and Adobe were caught up in similar battles with artists. Adobe landed in hot water after announcing a new AI feature under a vague policy that appeared to permit training AI on artists’ work. After many artists threatened legal action, the company quickly responded, and Adobe’s policy has since been updated to clearly recognize the intellectual property rights of artists over their work.
A similar issue arose with Google when its policy was updated to include AI training alongside its Google Translate software, which has long been trained on user data. Though Google’s blog post reassures users that work they produce with AI is protected from potential copyright infringement claims, it does not address users’ rights over their own work being used for AI training. This again raises the issue of infringement on artistic expression, style, and individuality, not to mention the right to be compensated for the use of one’s work.
To some, the solution is Nightshade: “an offence tool to distort feature representations inside generative AI image models.” Essentially, Nightshade embeds imperceptible perturbations into the pixels of an artist’s digital work, poisoning AI training models that scrape unlicensed art. However, this is just a Band-Aid on a problem: the models keep learning to work around human defenses.
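To give a rough sense of the idea (this is not Nightshade’s actual algorithm, just a minimal illustrative sketch): a poisoning tool nudges each pixel of an artwork a tiny, bounded amount toward a “decoy” image, so the change is invisible to a human viewer but skews the features a scraping model learns from. The `poison_image` function and `epsilon` bound below are hypothetical names for illustration only.

```python
import numpy as np

def poison_image(pixels: np.ndarray, decoy: np.ndarray, epsilon: int = 4) -> np.ndarray:
    """Nudge each pixel a small, bounded amount toward a decoy image.

    pixels, decoy: uint8 arrays of identical shape, e.g. (H, W, 3).
    epsilon: maximum per-channel change, kept small so the edit is
    imperceptible to humans while distorting features seen by a model.
    """
    p = pixels.astype(np.int16)          # widen to avoid uint8 overflow
    d = decoy.astype(np.int16)
    step = np.clip(d - p, -epsilon, epsilon)   # bounded nudge per pixel
    return np.clip(p + step, 0, 255).astype(np.uint8)

# Example: "shade" a random 64x64 RGB image toward a random decoy.
rng = np.random.default_rng(0)
art = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
decoy = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
shaded = poison_image(art, decoy)
```

No pixel moves by more than `epsilon`, which is why the perturbation survives casual inspection; real tools choose the perturbation adversarially against a specific model’s feature extractor rather than toward a fixed decoy.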
If anything, all this teaches us to brush up on our vocabulary and read every Terms and Conditions agreement that pops up when we open a new app. For too long, we’ve been docile, clicking “I agree” to contracts we don’t understand. Yet with the speed at which these new features are being released, government regulation can’t keep up with companies finding new ways to wade into the grey areas of ethical practice.
Published: Jul 18, 2024 12:59 pm