It’s been a bit over a year since the initial explosion in popularity of generative AI apps. In February and March of 2023, everyone’s social media feeds were awash in brightly colored machine-generated doodles; fake “songs” produced in the style of popular recording artists; brief, psychedelic, constantly shifting, and potentially nausea-inducing animated clips; and, of course, lengthy text threads about the exciting technological breakthroughs that made them all possible.
One year later, AI remains a very big and prominent business that continues to encroach on many, many other businesses and creative pursuits. Just this week, Amazon put another $2.75 billion into the AI startup Anthropic, in its largest venture investment to date; the Wall Street Journal reported on the industry’s war for talent that’s earning some AI experts million-dollar-per-year compensation packages; and the BBC highlighted attempts to integrate the tech on nearly 2 million American farms.
Still, the initial exuberance around AI and all of its various applications has given way to a good deal of cynicism as well, particularly when it’s deployed in artistic or creative projects. Last week, social media discovered that the producers of the indie horror film “Late Night with the Devil” used AI apps to generate some imagery that appears on screen throughout the movie. Though the filmmakers – siblings Colin and Cameron Cairnes – clarified that the AI was used purely as an experiment, and the images were retouched by a human graphics and production team before being placed in the film, many on X, Instagram and elsewhere still criticized the move as a betrayal of artists and an attempt to save money by cutting corners, while some threatened a boycott.
It’s a similar response to the one mogul and filmmaker Tyler Perry received back in February when he put plans to expand his Atlanta studio on hold after playing around with OpenAI’s text-to-video Sora software. In an interview with The Hollywood Reporter, Perry suggested that a wait-and-see approach to AI makes more sense than “trying to put in guardrails and trying to put in safety belts to keep livelihoods afloat.” While some pundits argued he was simply being a responsible businessman, others dismissed his concerns as “hysterical.”
In truth, just about any social media post about creative uses for AI, or one that showcases the output of AI apps, is likely to receive negative responses, while sharp criticisms of the technology trend daily and have become commonplace online.
The backlash largely stems from three core observations:
The AI apps were overhyped and failed to deliver on their initial promises
This week, Axios referred to the current state of AI as a “trough of disillusionment,” as users recognize that the initial burst of excitement around the technology may have oversold its capabilities. Google’s Gemini AI tool provided a high-profile example earlier this year, producing what even Google senior vice president Prabhakar Raghavan termed “embarrassing, inaccurate, or offensive results.” (The app produced images including African-American founding fathers, a female pope, and gay couples when prompted for images of straight couples, leading some on the right to declare it hopelessly “woke” and even “racist and anti-civilizational.”)
Not all of the recent, practical criticisms of AI tools have been political. Cognitive scientist Gary Marcus and others have been pointing out for a while now that, while AI apps are “cool” and produce content that garners interest on social media and elsewhere, they’re not yet particularly useful, and may remain too unreliable to fully integrate into our work and social lives.
Stability AI, the company behind the Stable Diffusion image generation software, has seen a wave of staff departures and shrinking cash reserves, and just this week lost its CEO, Emad Mostaque.
Meanwhile, the responses from the companies and executives behind the big AI apps are unchanging: it’ll keep getting better and smarter all the time. Just you wait.
Generative AI apps are built on stolen work and plagiarism
It’s well established that large language models, text-to-image generators, and other kinds of AI applications are trained by feeding in millions, or even billions, of prior examples that their results are intended to mimic. They’re fed a massive amount of text, artwork, designs, and imagery that was originally conceived of and created by humans, many of whom hold the copyright on their original work and were not compensated or even informed that their work was being used to train a computer system.
Copyright lawsuits based on these issues are currently working their way through the US justice system, while AI companies search for legal avenues to potentially obtain the vast corpus of data they need to produce next-generation software. Reddit, for example, recently struck a $60 million deal with Google to use its massive data trove for AI model training.
For many observers, using AI apps to complete a creative project is tantamount to plagiarizing or scraping another artist’s work yourself, with the technology merely serving as an intermediary.
AI is a generally negative trend that should not be encouraged
The most common anti-AI refrains on social media and elsewhere simply view the technology as an enemy: of working people, of clear and accurate communication, and of creativity. The more individuals feel encouraged to employ it as a tool or a shortcut, the more companies feel safe investing in AI apps over new human staffers, and the more difficult it becomes to discern AI-generated content from the human-produced variety, the worse things will get for society, technology, and culture.
Generative AI apps that make imagery, for example, don’t actually produce anything original or unique. They generate new images based on images they’ve already scanned, attempting to reproduce the kinds of artwork that humans have made. This leads to a narrowing and smoothing-out of the creative process, stripping anything imaginative from images and simply recycling the kinds of content that have been produced before. Over a long enough timeline, it threatens to dilute or extinguish the creative spark that makes art worth engaging with in the first place.
Still, despite these and other drawbacks, for many, AI apps remain an intriguing novelty. They can genuinely simplify lengthy and even arduous processes, and many of them have an immediate “cool” factor. It seems unlikely they’re going away any time soon, and even many artists, innovators, and creators are still tempted to give new releases or products a spin.
Back in February, Passionfruit wrote about creators using AI apps to save time and limit stress, putting them at less risk of the dreaded burnout. TikTok owner ByteDance has been investing millions in growing its generative AI divisions to keep up with demand for new tools and features. Just this week, OpenAI gave a group of ad agencies, musicians, artists, and filmmakers a sneak preview of its latest text-to-video model, Sora, to play around with, and all of them sang the praises of the technology and what it can achieve. Notably, this project has already seen some backlash: former Stability AI exec Ed Newton-Rex tweeted that this was a case of “artist-washing,” callously using creatives to hawk software that was trained on the unlicensed work of other artists.
For now, despite the rampant negativity toward these tools, many creators still find them too tantalizing and promising to disregard entirely. Perhaps OpenAI CEO Sam Altman is right, and they just need a bit more time to figure it all out.